Qing Qu is an assistant professor in the ECE Division of the Department of Electrical Engineering and Computer Science, College of Engineering, at the University of Michigan – Ann Arbor. He is also affiliated with the Michigan Institute for Data Science (MIDAS), the Michigan Center for Applied and Interdisciplinary Mathematics (MCAIM), and the Michigan Institute for Computational Discovery and Engineering (MICDE).
Short Bio: Dr. Qu received his B.E. degree from Tsinghua University, Beijing, China, in 2011, and his Ph.D. degree from Columbia University, advised by Prof. John Wright, in 2018. He was a Moore-Sloan Fellow at the NYU Center for Data Science from 2018 to 2020. His work has been recognized by several awards, including a Microsoft Ph.D. Fellowship in machine learning in 2016, an NSF CAREER Award in 2022, an Amazon AWS AI Award in 2023, a UM CHS Junior Faculty Award in 2025, and a Google Research Scholar Award in 2025. He is a founding organizer of the Conference on Parsimony and Learning (CPAL), an area chair for ICML, NeurIPS, and ICLR, and an action editor for TMLR.
Research Interests: Broadly speaking, our research lies at the intersection of signal processing, data science, machine learning, and numerical optimization. In particular, we are interested in computational methods for learning low-complexity models from high-dimensional data. Our group's current research focuses on (i) the foundations of generative AI (Slides), (ii) deep representation learning (Slides-1, Slides-2), and (iii) machine learning for scientific applications.
Opening: I am always looking for self-motivated students to work with, and our group typically hires 1–2 students per year, depending on the funding situation. If you are interested, please fill out the form and send me an email. Additionally, I am looking for postdocs through the (i) MICDE Research Scholars, (ii) MIDAS Fellows, and (iii) Schmidt AI in Science programs.
Recent Highlights & Upcoming Events:
- Tutorial (Website): Presenting a 2.5-hour tutorial at ICML’25 on “Harnessing Low Dimensionality in Diffusion Models: From Theory to Practice” with Prof. Liyue Shen and Prof. Yuxin Chen (Jul. 2025)
- Award: Received the Google Research Scholar Award in machine learning (Jun. 2025)
- New Paper Release on Foundations and Interpretability of Generative AI (May 2025):
- Out-of-Distribution Generalization of In-Context Learning: A Low-Dimensional Subspace Perspective
- Understanding Generalization in Diffusion Models via Probability Flow Distance
- Towards Understanding the Mechanisms of Classifier-Free Guidance
- The Dual Power of Interpretable Token Embeddings: Jailbreaking Attacks and Defenses for Diffusion Model Unlearning
- Group Member News: (1) Yifu Lu will join Princeton ECE as a Ph.D. student; (2) Jinfan Zhou will join UChicago CS as a Ph.D. student; (3) Pengyu Li will join Duke ECE as a Ph.D. student; (4) Siyi Chen will join UMich ECE as a Ph.D. student; (5) Dr. Peng Wang will join the University of Macau as an assistant professor in Fall 2025.
- Upcoming Tutorial: Presenting a full-day tutorial at ICCV’25 on “Learning Deep Low-dimensional Models from High-Dimensional Data: From Theory to Practice” with Dr. Sam Buchanan and Profs. Yi Ma, Liyue Shen, Atlas Wang, and Zhihui Zhu (Oct. 2025)
- SIAM News Article on understanding generalization of diffusion models (news, Apr. 2025)
- Group Member News: Congrats to Can Yaras and Siyi Chen on receiving the Rackham Predoctoral Fellowship! (news1, news2) (Mar. 2025)
- New Paper Release on generalizability and controllability of diffusion models (slides, presentation)
Recent Selected Publications:
- Can Yaras, Peng Wang, Laura Balzano, Qing Qu (2024). Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation. International Conference on Machine Learning (ICML’24), 2024. (Oral, top 1.5%, best poster award at MMLS’24)
  Preprint – PDF – BibTex – Code
- Huijie Zhang*, Jinfan Zhou*, Yifu Lu, Minzhe Guo, Liyue Shen, Qing Qu (2024). The Emergence of Reproducibility and Consistency in Diffusion Models. International Conference on Machine Learning (ICML’24), 2024. (Best Paper Award at NeurIPS’23 Workshop on Diffusion Models, news)
  Preprint – PDF – BibTex – Slides – Website
- Zhihui Zhu*, Tianyu Ding*, Jinxin Zhou, Xiao Li, Chong You, Jeremias Sulam, Qing Qu (2021). A Geometric Analysis of Neural Collapse with Unconstrained Features. Neural Information Processing Systems (NeurIPS’21), 2021. (Spotlight, top 3%)
  Preprint – PDF – Slides – BibTex – Code – Video
- Qing Qu, Yuexiang Zhai, Xiao Li, Yuqian Zhang, Zhihui Zhu (2020). Analysis of the Optimization Landscapes for Overcomplete Representation Learning. International Conference on Learning Representations (ICLR’20), 2020. (Oral, top 1.9%)
  Preprint – PDF – Slides – BibTex
- Qing Qu, Xiao Li, Zhihui Zhu (2020). Exact Recovery of Multichannel Sparse Blind Deconvolution via Gradient Descent. SIAM Journal on Imaging Sciences, 13(3): 1630–1652, 2020. (NeurIPS’19, spotlight, top 3%)
  Preprint – PDF – Code – Poster – Slides – BibTex
- Ju Sun, Qing Qu, John Wright (2018). A Geometric Analysis of Phase Retrieval. Foundations of Computational Mathematics, 18(5):1131–1198, 2018. (ISIT’16)
  Preprint – PDF – Code – Slides – BibTex
Funding Acknowledgement
Our group acknowledges generous support from the National Science Foundation (NSF), the Office of Naval Research (ONR), the Army Research Office (ARO), Amazon Research, KLA Corporation, MICDE, and MIDAS.
Contacts
Office: 4227, 1301 Beal Avenue, Ann Arbor, MI, 48109-2122