Qing Qu is an assistant professor in the ECE Division of the Electrical Engineering and Computer Science Department, College of Engineering, University of Michigan – Ann Arbor (effective Jan. 2021).
He received his B.E. degree from Tsinghua University, Beijing, China, in 2011, and his Ph.D. degree from Columbia University, advised by Prof. John Wright, in 2018. He was a Moore-Sloan Fellow at the NYU Center for Data Science from 2018 to 2020. He received the Best Student Paper Award at SPARS’15 (with Ju Sun and John Wright) and the 2016 Microsoft Ph.D. Fellowship in machine learning.
His research interests lie at the intersection of signal processing, data science, machine learning, and numerical optimization. He is particularly interested in computational methods for learning low-complexity models from high-dimensional data, leveraging tools from machine learning, numerical optimization, and high-dimensional geometry, with applications in imaging sciences, scientific discovery, and healthcare. Recently, he has also been interested in understanding deep networks through the lens of low-dimensional modeling. You can learn more about his research here.
There are PhD positions opening in Fall 2021; more details are available here.
- Co-organizing a workshop on “Seeking Low-dimensionality in Deep Neural Networks (SLowDNN)”, Nov. 23rd – Nov. 24th. (Nov. 2020)
- Elected as a member of the Computational Imaging (CI) TC for 2021-2023. (Nov. 2020)
- Our paper has been accepted at NeurIPS’20 as a spotlight (top 4%); it characterizes the implicit bias of discrepant learning rates and builds connections between over-parameterization, RPCA, and deep neural networks. (Sept. 2020)
- Recognized as one of the best reviewers (top 33%) at ICML’20 (Sept. 2020)
- Our grant proposal (joint with PI Carlos Fernandez-Granda), “Mathematical Analysis of Super-resolution via Nonconvex Optimization and Machine Learning”, has been awarded by NSF DMS. (Aug. 2020)
- Two review papers on nonconvex optimization have been released: Paper 1, Paper 2. (Jul. 2020)
- Gave a talk at TBSI-WOLT about my recent work on nonconvex optimization for learning low-complexity models: slides. (Jun. 2020)
- Two papers (Paper 1, Paper 2) have been accepted at ICLR’20, with one oral presentation (top 1.85%) (Dec. 2019)
- One paper has been released, which provides the first explicit convergence rate guarantees for a family of Riemannian subgradient methods for optimizing nonconvex nonsmooth functions over the Stiefel manifold. (Nov. 2019)
- One paper has been accepted at NeurIPS’19 as a spotlight (top 3%). (Sept. 2019)
- Recognized as one of the best reviewers at NeurIPS’19, and invited as a mentor to the first New In ML workshop at NeurIPS’19. (Sept. 2019)