Qing Qu
1301 Beal Avenue, Ann Arbor, MI 48109-2122

Qing Qu is an assistant professor in the ECE Division of the Electrical Engineering and Computer Science (EECS) Department in the College of Engineering, University of Michigan – Ann Arbor. He is also affiliated with the Michigan Center for Applied and Interdisciplinary Mathematics (MCAIM) and the Michigan Institute for Data Science (MIDAS).

He received his B.E. degree from Tsinghua University, Beijing, China, in 2011, and his Ph.D. degree from Columbia University in 2018, advised by Prof. John Wright. He was a Moore-Sloan Fellow at the NYU Center for Data Science from 2018 to 2020. He is a recipient of the Best Student Paper Award at SPARS’15 (with Ju Sun and John Wright) and of the 2016 Microsoft Ph.D. Fellowship in machine learning. He received the NSF CAREER Award in 2022.

His research interests lie at the intersection of signal processing, data science, machine learning, and numerical optimization. He is particularly interested in computational methods for learning low-complexity models from high-dimensional data, leveraging tools from machine learning, numerical optimization, and high-dimensional geometry, with applications in imaging sciences, scientific discovery, and healthcare. More recently, he has also been interested in understanding deep networks through the lens of low-dimensional modeling. You can learn more about his research here.

Here is his Google Scholar Profile. Find him on LinkedIn and follow him on Twitter.

Selected Publications

Zhihui Zhu (equal), Tianyu Ding (equal), Jinxin Zhou, Xiao Li, Chong You, Jeremias Sulam, Qing Qu (2021). A Geometric Analysis of Neural Collapse with Unconstrained Features. (NeurIPS’21, spotlight, top 3%)
Preprint – PDF – Slides – BibTeX

Qing Qu, Yuexiang Zhai, Xiao Li, Yuqian Zhang, Zhihui Zhu (2020). Analysis of the Optimization Landscapes for Overcomplete Representation Learning. (ICLR’20, oral, top 1.9%)
Preprint – PDF – Slides – BibTeX

Qing Qu, Xiao Li, Zhihui Zhu (2019). Exact Recovery of Multichannel Sparse Blind Deconvolution via Gradient Descent. SIAM Journal on Imaging Sciences, 13(3): 1630–1652, 2020. (NeurIPS’19, spotlight, top 3%)
Preprint – PDF – Code – Poster – Slides – BibTeX

Yuqian Zhang, Qing Qu, John Wright (2020). From Symmetry to Geometry: Tractable Nonconvex Problems. In submission to Proceedings of the IEEE.
Preprint – PDF – Slides – BibTeX

Ju Sun, Qing Qu, John Wright (2018). A Geometric Analysis of Phase Retrieval. Foundations of Computational Mathematics, 18(5): 1131–1198, 2018. (ISIT’16)
Preprint – PDF – Code – Slides – BibTeX

News

  • Giving educational lectures at ACDL’23 (Jun. 2023)
  • Giving an educational course at ICASSP’23 (Jun. 2023)
  • Received a gift grant from Amazon Research Awards. (Mar. 2023)
  • Organizing and giving a talk at the 3rd SLowDNN Workshop, Jan. 3rd to 6th, at MBZUAI, Abu Dhabi. (Jan. 2023)
  • New paper released: we studied the transferability of deep representations via neural collapse, and proposed efficient fine-tuning methods for vision problems. (Dec. 2022)
  • My postdoc Yutong Wang received a highly competitive Eric and Wendy Schmidt AI in Science Postdoctoral Fellowship; congrats, Yutong! (Dec. 2022)
  • Received a gift grant from the KLA Corporation, and will be collaborating on unsupervised deep learning. (Dec. 2022)
  • New paper on local linear convergence analysis of neural collapse with unconstrained features (Dec. 2022)
  • Presented two papers at NeurIPS’22, and one paper at the OPT Workshop (Nov. 2022)
  • One paper has been accepted for EMNLP (joint work with Hongyuan Mei’s group at TTIC), and one paper has been accepted by Nano Letters (joint work with Pei-Cheng Ku’s group) (Oct. 2022).
  • Two papers on theoretical studies of neural collapse have been accepted for NeurIPS’22: paper1, paper2 (Sept. 2022)
  • Received a PODS grant from MIDAS for supporting our research on “Machine Learning Guided Co-design for Reconstructive Spectroscopy”, joint with Pei-Cheng Ku (Jul. 2022).
  • Received a three-year grant from ONR for supporting our research on deep representation learning (Jun. 2022).
  • Received a new three-year NSF CISE Medium grant in collaboration with MSU ($1.2M in total), for supporting our research on unsupervised deep learning (Jun. 2022).
  • Taught a short course at ICASSP’22, see the website for more details. (May 2022)
  • Two papers have been accepted for ICML’22: paper1, paper2 (May 2022).
  • Received this year’s NSF CAREER Award, thanks NSF! See the news here and here. (Mar. 2022)
  • Two new papers are released on robust deep network training and neural collapse for MSE: paper1, paper2. (Mar. 2022)
  • Will be teaching a short educational course at ICASSP’22, titled “Low-Dimensional Models for High-Dimensional Data: From Linear to Nonlinear, Convex to Nonconvex, and Shallow to Deep”, in May 2022; stay tuned. (Feb. 2022)
  • Organizer of Workshop on Seeking Low-dimensionality in Deep Neural Networks (SLowDNN), November 22-23, 2021 (website)
  • Three papers have been accepted at NeurIPS’21: paper1, paper2, paper3 (Sept. 2021)
  • New paper released on rank-overspecified robust matrix recovery: subgradient method. (Sept. 2021)
  • Gave a talk at the SPS Webinar Series [slides] [video], titled “From Shallow to Deep Representation Learning in Imaging and Beyond: Global Nonconvex Theory and Algorithms” (Sept. 2021)
  • Gave a talk at the Workshop on Efficient Tensor Representations for Learning and Computational Complexity at IPAM, UCLA, titled “Landscape Analysis for Overcomplete Tensor and Neural Collapse” [slides] [video] (May 2021)
  • New paper released on global geometric analysis for neural collapse (May 2021)
  • New paper released on convolutional normalization method for ConvNets (Mar. 2021)
  • Online talk at the “运筹千里” 纵横论坛 (Zongheng Forum) organized by the Operations Research Society of China (Dec. 2nd, 2020) [Link]
  • Co-organizing a workshop on “Seeking Low-dimensionality in Deep Neural Networks (SLowDNN)“, Nov. 23rd – Nov. 24th. (Nov. 2020)
  • Elected as a member of the Computational Imaging (CI) Technical Committee for 2021-2023. (Nov. 2020)
  • Our paper has been accepted at NeurIPS’20 as spotlight (top 4%), which characterizes implicit bias with discrepant learning rates and builds connections between over-parameterization, RPCA, and deep neural networks (Sept. 2020)
  • Recognized as one of the best reviewers (top 33%) at ICML’20 (Sept. 2020)
  • Our grant proposal (joint with PI Carlos Fernandez-Granda), “Mathematical Analysis of Super-resolution via Nonconvex Optimization and Machine Learning”, has been funded by NSF DMS (Aug. 2020)
  • Two review papers on nonconvex optimization are released: Paper 1, Paper 2 (Jul. 2020)
  • Gave a talk at TBSI-WOLT about my recent work on nonconvex optimization for learning low-complexity models: slides (Jun. 2020)
  • Two papers (Paper 1, Paper 2) have been accepted at ICLR’20, with one oral presentation (top 1.85%) (Dec. 2019)
  • One paper has been released, providing the first explicit convergence rate guarantees for a family of Riemannian subgradient methods for optimizing nonconvex nonsmooth functions over the Stiefel manifold. (Nov. 2019)
  • One paper has been accepted at NeurIPS’19 as spotlight (top 3%) (Sept. 2019)
  • Recognized as one of the best reviewers at NeurIPS’19, and invited as a mentor to the first New in ML workshop at NeurIPS’19. (Sept. 2019)