Biography
Zhi-Qin John Xu, 许志钦
Tenure-track Associate Professor, 副教授
Institute of Natural Sciences, 自然科学研究院
School of Mathematical Sciences, 数学科学学院
Shanghai Jiao Tong University, 上海交通大学
Zhi-Qin John Xu is a tenure-track associate professor at the Institute of Natural Sciences / School of Mathematical Sciences, Shanghai Jiao Tong University. He graduated from Zhiyuan College of Shanghai Jiao Tong University in 2012 and received his Ph.D. in applied mathematics from Shanghai Jiao Tong University in 2016. From 2016 to 2019, he was a postdoctoral fellow at NYU Abu Dhabi and the Courant Institute. He and his collaborators discovered the frequency principle, parameter condensation, and the embedding principle in deep learning, and developed multi-scale neural networks. He has published papers as first or corresponding author in JMLR, AAAI, NeurIPS, SIMODS, CiCP, CSIAM Trans. Appl. Math., JCP, Combustion and Flame, and Eur. J. Neurosci., among other journals and conferences. He is currently one of the managing editors of the Journal of Machine Learning.
New Journal: Journal of Machine Learning
We have launched a new journal, the Journal of Machine Learning (JML). Submissions are welcome.
Editor-in-Chief: Prof. Weinan E
Scope: The Journal of Machine Learning (JML) publishes high-quality research papers in all areas of machine learning, including innovative machine learning algorithms, machine learning theory, and important applications of machine learning in AI, the natural sciences, the social sciences, and engineering. The journal emphasizes balanced coverage of both theory and practice, and is published in a timely fashion in electronic form.
WHY: Although the world is generally over-populated with journals, the field of machine learning (ML) is one exception. In mathematics, we simply do not have a recognized venue (other than conference proceedings) for publishing our work on ML. In AI for Science, ideally we would like to publish our work in leading scientific journals such as Physical Review Letters; however, this is difficult while we are still at the stage of developing methodologies. Although there are many conferences in ML-related areas, journal publication remains the preferred venue in many disciplines.
The objective of the Journal of Machine Learning (JML) is to become a leading journal in all areas related to ML, including algorithms and theory for ML as well as applications to science and AI. JML will start as a quarterly publication. Since ML is a vast and fast-developing field, we will do our best to carry out a thorough and responsive review process. To this end, we have a group of young and active managing editors who handle the review process, and a large, interdisciplinary group of experienced board members who can offer quick opinions and suggest reviewers when needed.
Open access: Yes.
Fee: No.
Sponsors: Center for Machine Learning Research, Peking University & AI for Science Institute, Beijing
Publisher: Global Science Press; the editorial board owns the journal.
Editorial Board
Materials
A suggested notation for machine learning (通用机器学习符号), published by BAAI (北京智源); see the page on GitHub or the page at BAAI.
Slides from the First Conference on Machine Learning and Scientific Applications (第一届机器学习与科学应用大会, CSML2022).
Slides from CSIAM 2020.
Video course on Bilibili (B站).
Code on GitHub.
Research Interests
I am interested in understanding deep learning through its training process, loss landscape, generalization, and applications. For example, we found the Frequency Principle (F-Principle): deep neural networks (DNNs) often fit target functions from low frequency to high frequency during training. An overview paper on the frequency principle is available at arXiv:2201.07395. We also found an embedding principle: the loss landscape of a DNN "contains" all the critical points of all narrower DNNs.
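The following is a minimal sketch (an assumed setup, not the original experiment code from the papers) of how the F-Principle can be observed numerically, assuming PyTorch and NumPy: train a small tanh network on a 1D target containing one low-frequency and one high-frequency component, and track the relative error of each Fourier coefficient during training. The grid size, network width, and learning rate are illustrative choices.

```python
# A minimal sketch (assumed setup, not the original experiment code) of the
# F-Principle: a small tanh network fits the low-frequency component of a
# 1D target before the high-frequency one.
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)

# Periodic sampling grid on [-1, 1) so the two target frequencies fall
# exactly into Fourier bins k=1 and k=10.
n = 256
x = (torch.arange(n, dtype=torch.float32) / n * 2 - 1).unsqueeze(1)
y = torch.sin(np.pi * x) + torch.sin(10 * np.pi * x)

net = nn.Sequential(
    nn.Linear(1, 200), nn.Tanh(),
    nn.Linear(200, 200), nn.Tanh(),
    nn.Linear(200, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def freq_error(pred, target, k):
    """Relative error of the k-th Fourier coefficient of pred vs. target."""
    p = np.fft.rfft(pred.detach().numpy().ravel())
    t = np.fft.rfft(target.numpy().ravel())
    return abs(p[k] - t[k]) / abs(t[k])

for step in range(5001):
    opt.zero_grad()
    loss = ((net(x) - y) ** 2).mean()
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        e_low = freq_error(net(x), y, k=1)    # sin(pi x) component
        e_high = freq_error(net(x), y, k=10)  # sin(10 pi x) component
        print(f"step {step:5d}  loss {loss.item():.4f}  "
              f"low-freq err {e_low:.3f}  high-freq err {e_high:.3f}")
```

In runs of this kind, the low-frequency error typically drops well before the high-frequency error, which is the signature of the F-Principle described above.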
I am also interested in computational neuroscience, ranging from theoretical studies and simulations to data analysis.