Biography
Young Associate Researcher. He received his Ph.D. from Shanghai Jiao Tong University in 2019 and subsequently conducted postdoctoral research at The Hong Kong University of Science and Technology. His main research results have been published in international journals and conferences such as JMLR, NeurIPS, ICML, and SIGKDD.
Recent Publications
Preprints
Yunyan Bai, Yuxing Liu, Luo Luo. On the Complexity of Finite-Sum Smooth Optimization under the Polyak-Łojasiewicz Condition. arXiv preprint:2402.02569, 2024.
Chengchang Liu, Cheng Chen, Luo Luo. Symmetric Rank-k Methods. arXiv preprint:2303.16188, 2023.
Chengchang Liu, Luo Luo. Regularized Newton Methods for Monotone Variational Inequalities with Hölder Continuous Jacobians. arXiv preprint:2212.07824, 2022.
Lesi Chen, Luo Luo. Near-Optimal Algorithms for Making the Gradient Small in Stochastic Minimax Optimization. arXiv preprint:2208.05925, 2022.
Conference Publications
Lesi Chen, Haishan Ye, Luo Luo. An Efficient Stochastic Algorithm for Decentralized Nonconvex-Strongly-Concave Minimax Optimization. International Conference on Artificial Intelligence and Statistics (AISTATS), 2024.
Zhuanghua Liu, Luo Luo, Bryan Kian Hsiang Low. Incremental Quasi-Newton Methods with Faster Superlinear Convergence Rates. AAAI Conference on Artificial Intelligence (AAAI), 2024.
Zhenwei Lin, Jingfan Xia, Qi Deng, Luo Luo. Decentralized Gradient-Free Methods for Stochastic Non-Smooth Non-Convex Optimization. AAAI Conference on Artificial Intelligence (AAAI), 2024.
Haikuo Yang, Luo Luo, Chris Junchi Li, Michael I. Jordan, Maryam Fazel. Accelerating Inexact HyperGradient Descent for Bilevel Optimization. Workshop on Optimization for Machine Learning (NeurIPS Workshop), 2023.
Chengchang Liu, Cheng Chen, Luo Luo, John C.S. Lui. Block Broyden's Methods for Solving Nonlinear Equations. Advances in Neural Information Processing Systems (NeurIPS), 2023.
Chengchang Liu, Lesi Chen, Luo Luo, John C.S. Lui. Communication Efficient Distributed Newton Method with Fast Convergence Rates. ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2023.
Lesi Chen, Jing Xu, Luo Luo. Faster Gradient-Free Algorithms for Nonsmooth Nonconvex Stochastic Optimization. International Conference on Machine Learning (ICML), 2023.
Chengchang Liu, Luo Luo. Quasi-Newton Methods for Saddle Point Problems. Advances in Neural Information Processing Systems (NeurIPS), 2022.
Lesi Chen, Boyuan Yao, Luo Luo. Faster Stochastic Algorithms for Minimax Optimization under Polyak-Łojasiewicz Condition. Advances in Neural Information Processing Systems (NeurIPS), 2022.
Luo Luo, Yujun Li, Cheng Chen. Finding Second-Order Stationary Points in Nonconvex-Strongly-Concave Minimax Optimization. Advances in Neural Information Processing Systems (NeurIPS), 2022.
Chengchang Liu, Shuxian Bi, Luo Luo, John C.S. Lui. Partial-Quasi-Newton Methods: Efficient Algorithms for Minimax Optimization Problems with Unbalanced Dimensionality. ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2022. Best Paper Runner-Up
Luo Luo, Cheng Chen, Guangzeng Xie, Haishan Ye. Revisiting Co-Occurring Directions: Sharper Analysis and Efficient Algorithm for Sparse Matrices. AAAI Conference on Artificial Intelligence (AAAI), 2021.
Luo Luo, Haishan Ye, Zhichao Huang, Tong Zhang. Stochastic Recursive Gradient Descent Ascent for Stochastic Nonconvex-Strongly-Concave Minimax Problems. Advances in Neural Information Processing Systems (NeurIPS), 2020.
Cheng Chen, Luo Luo, Weinan Zhang, Yong Yu. Efficient Projection-Free Algorithms for Saddle Point Problems. Advances in Neural Information Processing Systems (NeurIPS), 2020.
Haishan Ye, Ziang Zhou, Luo Luo, Tong Zhang. Decentralized Accelerated Proximal Gradient Descent. Advances in Neural Information Processing Systems (NeurIPS), 2020.
Guangzeng Xie, Luo Luo, Yijiang Lian, Zhihua Zhang. Lower Complexity Bounds for Finite-Sum Convex-Concave Minimax Optimization Problems. International Conference on Machine Learning (ICML), 2020.
Cheng Chen, Luo Luo, Weinan Zhang, Yong Yu, Yijiang Lian. Efficient and Robust High-Dimensional Linear Contextual Bandits. International Joint Conference on Artificial Intelligence (IJCAI), 2020.
Luo Luo, Wenpeng Zhang, Zhihua Zhang, Wenwu Zhu, Tong Zhang, Jian Pei. Sketched Follow-The-Regularized-Leader for Online Factorization Machine. ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2018.
Haishan Ye, Luo Luo, Zhihua Zhang. Approximate Newton Methods and Their Local Convergence. International Conference on Machine Learning (ICML), 2017.
Zihao Chen, Luo Luo, Zhihua Zhang. Communication Lower Bounds for Distributed Convex Optimization: Partition Data on Features. AAAI Conference on Artificial Intelligence (AAAI), 2017.
Tianfan Fu, Luo Luo, Zhihua Zhang. Quasi-Newton Hamiltonian Monte Carlo. Conference on Uncertainty in Artificial Intelligence (UAI), 2016.
Qiaomin Ye, Luo Luo, Zhihua Zhang. Frequent Direction Algorithms for Approximate Matrix Multiplication with Applications in CCA. International Joint Conference on Artificial Intelligence (IJCAI), 2016.
Luo Luo, Yubo Xie, Zhihua Zhang, Wu-Jun Li. Support Matrix Machines. International Conference on Machine Learning (ICML), 2015.
Zhiquan Liu, Luo Luo, Wu-Jun Li. Robust Crowdsourced Learning. IEEE International Conference on Big Data, 2013.
Journal Publications
Haishan Ye, Luo Luo, Ziang Zhou, Tong Zhang. Multi-Consensus Decentralized Accelerated Gradient Descent. Journal of Machine Learning Research (JMLR), 24(306):1-50, 2023.
Haishan Ye, Luo Luo, Zhihua Zhang. Approximate Newton Methods. Journal of Machine Learning Research (JMLR), 22(66):1-41, 2021.
Haishan Ye, Luo Luo, Zhihua Zhang. Accelerated Proximal Sub-Sampled Newton Method. IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2020.
Haishan Ye, Luo Luo, Zhihua Zhang. Nesterov's Acceleration for Approximate Newton. Journal of Machine Learning Research (JMLR), 21(142):1-37, 2020.
Luo Luo, Cheng Chen, Zhihua Zhang, Wu-Jun Li, Tong Zhang. Robust Frequent Directions with Application in Online Learning. Journal of Machine Learning Research (JMLR), 20(45):1-41, 2019.
Haishan Ye, Guangzeng Xie, Luo Luo, Zhihua Zhang. Fast Stochastic Second-Order Method Logarithmic in Condition Number. Pattern Recognition, 88:629-642, 2019.
Shusen Wang, Luo Luo, Zhihua Zhang. SPSD Matrix Approximation via Column Selection: Theories, Algorithms and Extensions. Journal of Machine Learning Research (JMLR), 17(49):1-49, 2016.