Profile
Qiang Wang, Assistant Professor, School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen). He received his B.E. degree from the School of Computer Science and Engineering, South China University of Technology, in 2014, and his Ph.D. degree from the Department of Computer Science, Hong Kong Baptist University, in 2020, supported by the Hong Kong PhD Fellowship Scheme (HKPFS). From 2020 to 2022, he was a Research Assistant Professor (RAP) in the Department of Computer Science at Hong Kong Baptist University. His research interests include GPU computing, energy-efficient computing, distributed and parallel computing, and efficient deep learning. He has published more than 20 papers in top international conferences and journals, including TPDS, EuroSys, ICDCS, INFOCOM, CVPR, ECCV, ACM MM, AAAI, ICRA, and ACM e-Energy. He serves the academic community as a program committee member/reviewer for top conferences (e.g., AAAI, ICRA) and as a reviewer for top journals (e.g., TPDS, TNSE, TODAES).
Biography
2022.05-present, Assistant Professor, Harbin Institute of Technology (Shenzhen)
2022.01-2022.05, Senior Engineer, Tencent (Shenzhen)
2020.09-2022.01, Research Assistant Professor, Hong Kong Baptist University
2015-2020, Ph.D., Hong Kong Baptist University, supervised by Prof. X.-W. Chu
2010-2014, B.E., Computer Science and Technology, South China University of Technology, China
Work Experience
2020.06-2020.08, Research Intern, Alibaba Ant Financial Services (蚂蚁金服)
2019.08-2020.05, Research Intern, Nvidia Corporation
2019-2020, Research Assistant, Hong Kong Baptist University, supervised by Prof. X.-W. Chu
2015-2019, Teaching Assistant, Department of Computer Science, Hong Kong Baptist University
2014-2015, Research Assistant, Hong Kong Baptist University, supervised by Prof. X.-W. Chu
Awards
2020, RPg Performance Award Scheme, Hong Kong Baptist University
2016-2019, Excellent Teaching Assistant Performance Award
2015, Hong Kong PhD Fellowship
2013, American Mathematical Contest in Modeling, Honorable Prize
2011, National Scholarship
Recent Publications
Conference
Y. Zhang, Q. Wang*, Z. Lin, P. Xu, B. Wang, “Improving GPU Energy Efficiency through an Application-transparent Frequency Scaling Policy with Performance Assurance”, The European Conference on Computer Systems (EuroSys), 2024.
Z. Tang, Y. Wang, X. He, L. Zhang, X. Pan, Q. Wang, R. Zeng, S. Shi, B. He, and X.-W. Chu, “FusionAI: Decentralized Training and Deploying LLMs with Massive Consumer-Level GPUs,” Symposium on Large Language Models (LLM 2023) with IJCAI 2023, Macao, China, August 21, 2023.
R. Zhang, J. Chen, Q. Wang, “Explicify Neural Implicit Fields for Efficient Dynamic Human Avatar Modeling via a Neural Explicit Surface”, ACM International Conference on Multimedia (ACMMM), 2023.
Q. Yan*, Q. Wang*, K. Zhao, B. Li, X. Chu, F. Deng, “Rethinking Disparity: A Depth Range Free Multi-View Stereo Based on Disparity”, AAAI Conference on Artificial Intelligence (AAAI), 2023.
Q. Yan, Q. Wang, K. Zhao, B. Li, X. Chu, F. Deng, “SphereDepth: Panorama Depth Estimation from Spherical Domain”, International Conference on 3D Vision (3DV), 2022. (CCF C)
Q. Wang, S. Shi, K. Zhao and X.-W. Chu, “EASNet: Searching Elastic and Accurate Network Architecture for Stereo Matching,” European Conference on Computer Vision (ECCV), 2022.
Q. Wang, S. Zheng, Q. Yan, F. Deng, K. Zhao, and X.-W. Chu, “IRS: A Large Naturalistic Indoor Robotics Stereo Dataset to Train Deep Models for Disparity and Surface Normal Estimation”, IEEE International Conference on Multimedia and Expo (ICME), 2021. (Oral: 15%)
S. Zhang, Z. Wang, Q. Wang, J. Zhang, G. Wei, and X.-W. Chu, “EDNet: Efficient Disparity Estimation with Cost Volume Combination and Attention-based Spatial Residual”, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
Y. Wang, Q. Wang, and X.-W. Chu, “Energy-efficient Inference Service of Transformer-based Deep Learning Models on GPUs,” IEEE GreenCom, Greece, 2020. (Best Paper Award)
Q. Wang, S. Shi, S. Zheng, K. Zhao, and X.-W. Chu, “FADNet: A Fast and Accurate Network for Disparity Estimation”, International Conference on Robotics and Automation (ICRA), 2020.
S. Shi, Z. Tang, Q. Wang, K. Zhao, and X.-W. Chu, “Layer-wise Adaptive Gradient Sparsification for Distributed Deep Learning with Convergence Guarantees,” The 24th European Conference on Artificial Intelligence (ECAI), Santiago de Compostela, Spain, June 2020.
S. Shi, Q. Wang, X.-W. Chu, B. Li, Y. Qin, R. Liu, and X. Zhao, “Communication-Efficient Distributed Deep Learning with Merged Gradient Sparsification on GPUs,” IEEE INFOCOM 2020, Beijing, China, May 2020.
S. Shi, Q. Wang, and X.-W. Chu, “Efficient Sparse-Dense Matrix-Matrix Multiplication on GPUs Using the Customized Sparse Storage Format,” IEEE ICPADS 2020, Hong Kong, China, Dec 2020.
Y. Wang, Q. Wang, S. Shi, X. He, Z. Tang, K. Zhao, and X.-W. Chu, “Benchmarking the Performance and Power of AI Accelerators for AI Training,” 3rd High Performance Machine Learning Workshop (HPML 2020), co-located with IEEE CCGrid 2020, Melbourne, Australia, 2020.
Q. Wang, C. Liu, and X.-W. Chu, “GPGPU Performance Estimation for Frequency Scaling Using Cross-Benchmarking,” Proceedings of the 13th Workshop on General Purpose Processing Using GPUs (GPGPU), 2020.
Z. Tang, Y. Wang, Q. Wang, and X.-W. Chu, “The Impact of GPU DVFS on the Energy and Performance of Deep Learning: an Empirical Study,” ACM e-Energy 2019, Phoenix, AZ, USA, June 2019. (notes paper)
S. Shi, K. Zhao, Q. Wang, Z. Tang, and X.-W. Chu, “A Convergence Analysis of Distributed SGD with Communication-Efficient Gradient Sparsification,” IJCAI 2019, Macau, P.R.C., August 2019.
S. Shi, Q. Wang, K. Zhao, Z. Tang, Y. Wang, X. Huang, and X.-W. Chu, “A Distributed Synchronous SGD Algorithm with Global Top-k Sparsification for Low Bandwidth Networks,” IEEE ICDCS 2019, Dallas, Texas, USA, July 2019.
S. Shi, Q. Wang, X.-W. Chu, and B. Li, “A DAG Model of Synchronous Stochastic Gradient Descent in Distributed Deep Learning,” IEEE International Conference on Parallel and Distributed Systems (ICPADS) 2018, Singapore, Dec 2018.
S. Shi, Q. Wang, and X.-W. Chu, “Performance Modeling and Evaluation of Distributed Deep Learning Frameworks on GPUs,” IEEE DataCom 2018, Athens, Greece, August 2018. (Best Paper Award)
Q. Wang and X.-W. Chu, “GPGPU Performance Estimation with Core and Memory Frequency Scaling,” IEEE International Conference on Parallel and Distributed Systems (ICPADS) 2018, Singapore, Dec 2018. [A poster of this work has been presented at The International Conference for High Performance Computing, Networking, Storage, and Analysis (SC18), Dallas, USA, Nov 2018.]
Q. Wang, P. Xu, Y. Zhang, and X.-W. Chu, “EPPMiner: An Extended Benchmark Suite for Energy, Power and Performance Characterization of Heterogeneous Architecture,” ACM e-Energy 2017, Hong Kong, May 2017. (Best Paper Finalist)
S. Shi, Q. Wang, P. Xu, and X.-W. Chu, “Benchmarking State-of-the-Art Deep Learning Software Tools,” the 7th International Conference on Cloud Computing and Big Data (CCBD 2016), Macau, China, Nov 2016.
Journal
Q. Wang, X. Mei, X.-W. Chu, H. Liu, Y.-W. Leung, and Z. Li, “Energy-aware Non-preemptive Task Scheduling with Deadline Constraint in DVFS-enabled Heterogeneous Clusters,” IEEE Transactions on Parallel and Distributed Systems (TPDS), 2022.
W. Xing, J. Chen, Z. Yang, Q. Wang, Y. Guo, “Scale-Consistent Fusion: From Heterogeneous Local Sampling to Global Immersive Rendering”, IEEE Transactions on Image Processing (TIP), 2022.
Y. Wang, Q. Wang and X.-W. Chu, “Energy-efficient Online Scheduling of Transformer Inference Services on GPU Servers,” IEEE Transactions on Green Communications and Networking (TGCN), 2022.
Q. Wang and X.-W. Chu, “GPGPU Performance Estimation with Core and Memory Frequency Scaling,” IEEE Transactions on Parallel and Distributed Systems, Vol. 31, No. 12, pages 2865-2881, Dec 2020.
C. Liu*, Q. Wang* and X.-W. Chu, “ESetStore: an Erasure-coded Storage System with Fast Data Recovery,” IEEE Transactions on Parallel and Distributed Systems (TPDS), 2020.
C. Liu, Q. Wang, X.-W. Chu, and Y.-W. Leung, “G-CRS: GPU Accelerated Cauchy Reed-Solomon Coding,” IEEE Transactions on Parallel and Distributed Systems (TPDS), Vol. 29, No. 7, pages 1482-1498, July 2018.
Q. Wang and X.-W. Chu, “GPGPU Power Estimation with Core and Memory Frequency Scaling,” ACM SIGMETRICS Performance Evaluation Review, October 2017.
X. Mei, Q. Wang, and X.-W. Chu, “A Survey and Measurement Study of GPU DVFS on Energy Conservation,” Digital Communications and Networks, 2017.