Qing Qu
1301 Beal Avenue, Ann Arbor, MI 48109-2122

News

GenAI diffusion models learn to generate new content more consistently than expected

Award-winning research led by Prof. Qing Qu uncovered an intriguing phenomenon: diffusion models consistently produce nearly identical content from the same noise input, regardless of model architecture or training procedure.
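
A minimal, self-contained sketch of the kind of consistency check this finding suggests: two small diffusion models with different architectures and random initializations are trained on the same toy dataset, then sampled deterministically from the same starting noise, and the paired outputs are compared against randomly re-matched ones. The toy data, network sizes, and DDIM-style sampler below are illustrative assumptions, not the setup used in the actual research.

```python
# Toy illustration: two independently trained diffusion models, same starting noise.
import torch
import torch.nn as nn

torch.manual_seed(0)
T = 100
betas = torch.linspace(1e-4, 0.02, T)
abar = torch.cumprod(1.0 - betas, dim=0)                 # cumulative alpha-bar

# Toy dataset: points on a circle of radius 2.
theta = torch.rand(4096) * 2 * torch.pi
data = torch.stack([2 * torch.cos(theta), 2 * torch.sin(theta)], dim=1)

def make_model(width, seed):
    torch.manual_seed(seed)
    return nn.Sequential(nn.Linear(3, width), nn.ReLU(),
                         nn.Linear(width, width), nn.ReLU(),
                         nn.Linear(width, 2))

def train(model, steps=3000, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        x0 = data[torch.randint(0, len(data), (256,))]
        t = torch.randint(0, T, (256,))
        eps = torch.randn_like(x0)
        xt = abar[t].sqrt()[:, None] * x0 + (1 - abar[t]).sqrt()[:, None] * eps
        pred = model(torch.cat([xt, t[:, None].float() / T], dim=1))
        loss = ((pred - eps) ** 2).mean()                # standard noise-prediction loss
        opt.zero_grad(); loss.backward(); opt.step()
    return model

@torch.no_grad()
def ddim_sample(model, noise):
    x = noise.clone()
    for t in reversed(range(T)):                         # deterministic DDIM-style steps
        eps = model(torch.cat([x, torch.full((len(x), 1), t / T)], dim=1))
        x0_hat = (x - (1 - abar[t]).sqrt() * eps) / abar[t].sqrt()
        ab_prev = abar[t - 1] if t > 0 else torch.tensor(1.0)
        x = ab_prev.sqrt() * x0_hat + (1 - ab_prev).sqrt() * eps
    return x

# Different widths and initializations, same training data.
m1 = train(make_model(64, seed=1))
m2 = train(make_model(128, seed=2))

noise = torch.randn(512, 2)                              # shared starting noise
s1, s2 = ddim_sample(m1, noise), ddim_sample(m2, noise)
paired = (s1 - s2).norm(dim=1).mean().item()
shuffled = (s1 - s2[torch.randperm(512)]).norm(dim=1).mean().item()
print(f"paired distance {paired:.3f} vs shuffled {shuffled:.3f}")
```

If the reproducibility phenomenon holds even in this miniature setting, the paired distance should come out much smaller than the shuffled one.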

News 2023

Improving generative AI models for real-world medical imaging

Professors Liyue Shen, Qing Qu, and Jeff Fessler are working to develop efficient diffusion models for a variety of practical scientific and medical applications.

Neural Collapse research seeks to advance mathematical understanding of deep learning

Led by Prof. Qing Qu, the project could influence the application of deep learning in areas such as machine learning, optimization, signal and image processing, and computer vision.
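
For context, "neural collapse" refers to an empirical phenomenon in which, late in training, the last-layer features of each class concentrate tightly around their class means while those means spread apart in a highly symmetric configuration. The sketch below uses synthetic features rather than a trained network and computes one commonly used measurement of the effect (within-class covariance measured against between-class covariance, often called NC1); it is an illustration of the concept, not the project's methodology.

```python
# Synthetic illustration of the NC1 (within-class variability) measurement.
import numpy as np

rng = np.random.default_rng(0)
K, n, d = 10, 100, 512                    # classes, samples per class, feature dimension

# Hypothetical last-layer features: tight clusters around random class means.
means = rng.normal(size=(K, d))
feats = means[:, None, :] + 0.05 * rng.normal(size=(K, n, d))

global_mean = feats.mean(axis=(0, 1))
class_means = feats.mean(axis=1)                                   # (K, d)

# Within-class and between-class covariance matrices.
centered_w = feats - class_means[:, None, :]
Sigma_W = np.einsum('kni,knj->ij', centered_w, centered_w) / (K * n)
centered_b = class_means - global_mean
Sigma_B = np.einsum('ki,kj->ij', centered_b, centered_b) / K

# NC1 metric: trace(Sigma_W pinv(Sigma_B)) / K, which shrinks toward 0 under collapse.
nc1 = np.trace(Sigma_W @ np.linalg.pinv(Sigma_B)) / K
print(f"NC1 = {nc1:.4f}")
```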


April 17, 2023

Qing Qu receives Amazon Research Award

Qu’s research project in the area of machine learning algorithms and theory is called “Principles of deep representation learning via neural collapse.” Awardees, who represent 54 universities in 14 countries, have access to Amazon public datasets, along with AWS AI/ML services and tools.

Miniature and durable spectrometer for wearable applications

A team led by P.C. Ku and Qing Qu has developed a miniature, paper-thin spectrometer measuring 0.16 mm² that can also withstand harsh environments.

Teaching Machine Learning in ECE

With new courses at the undergraduate and graduate levels, ECE is delivering state-of-the-art instruction in machine learning for students in ECE and across the University.

Qing Qu receives CAREER award to explore the foundations of machine learning and data science

His research develops computational methods for learning succinct representations from high-dimensional data.

Prof. Qing Qu uses data and machine learning to optimize the world

A new faculty member at Michigan, Qu conducts research with applications in imaging sciences, scientific discovery, healthcare, and more.

I have been recognized as one of the best reviewers at NeurIPS’19 and invited to serve as a mentor at the first New in ML workshop at NeurIPS.

One paper has been accepted at NeurIPS’19 as spotlight (top 3%)

Our paper, titled A Nonconvex Approach for Exact and Efficient Multichannel Sparse Blind Deconvolution, has been accepted at NeurIPS’19 as a spotlight (top 3%). This is joint work with Xiao Li and Zhihui Zhu.
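
For context, a schematic version of the problem the title refers to (the exact objective and assumptions analyzed in the paper may differ): each observation is a circular convolution of one unknown kernel with an unknown sparse signal, and a common nonconvex strategy in this line of work searches for an approximate inverse filter on the unit sphere by minimizing a sparsity-promoting loss of the filtered observations.

```latex
% Schematic formulation; \rho is a generic sparsity-promoting loss (e.g. Huber)
% and \circledast denotes circular convolution.
y_i = a \circledast x_i, \quad x_i \text{ sparse}, \quad i = 1, \dots, p,
\qquad
\min_{h \in \mathbb{S}^{n-1}} \; \frac{1}{p} \sum_{i=1}^{p} \rho\big(h \circledast y_i\big).
```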

Two papers have been accepted at ICLR’20, with one oral presentation (top 1.85%)

Our paper, titled Geometric Analysis of Nonconvex Optimization Landscapes for Overcomplete Learning, has been accepted at ICLR’20 as an oral presentation (top 1.85%). Our paper, titled Short-and-Sparse Deconvolution – A Geometric Approach, has been accepted at ICLR’20 as a poster (acceptance rate 26.5%).

A new review paper has been submitted to IEEE Signal Processing Magazine

The paper, titled Finding the Sparsest Vectors in a Subspace: Theory, Algorithms, and Applications, reviews recent work on nonconvex optimization methods for finding the sparsest vectors in linear subspaces. This is joint work with Zhihui Zhu, Xiao Li, Manolis Tsakiris, John Wright, and Rene Vidal.
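
A minimal sketch of the core problem the review addresses: given a basis Y of a linear subspace, find the sparsest nonzero vector in its span, commonly posed as minimizing the l1 norm of Yq subject to ||q||_2 = 1. The planted-sparse-vector setup and the projected-subgradient loop below are illustrative only, not one of the algorithms analyzed in the paper.

```python
# Illustrative planted-sparse-vector experiment with a simple projected-subgradient heuristic.
import numpy as np

rng = np.random.default_rng(0)
p, n = 200, 10                              # ambient dimension, subspace dimension

# Plant a sparse vector inside an otherwise random n-dimensional subspace.
x_sparse = np.zeros(p)
x_sparse[rng.choice(p, size=8, replace=False)] = rng.normal(size=8)
basis = np.column_stack([x_sparse, rng.normal(size=(p, n - 1))])
Y, _ = np.linalg.qr(basis)                  # orthonormal basis of the subspace (p x n)

# Projected subgradient descent on the sphere for f(q) = ||Y q||_1.
q = rng.normal(size=n)
q /= np.linalg.norm(q)
for it in range(2000):
    g = Y.T @ np.sign(Y @ q)                # subgradient of ||Y q||_1
    q -= (0.1 / np.sqrt(it + 1)) * g
    q /= np.linalg.norm(q)                  # project back onto the unit sphere

x_hat = Y @ q                               # candidate sparse vector in the subspace
print("largest entries (magnitude) of recovered vector:", np.sort(np.abs(x_hat))[-10:])
```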

Invited to be a speaker and organizer for the workshop on Efficient Tensor Representations for Learning and Computational Complexity

The workshop will be held May 17–21, 2021, at the Institute for Pure and Applied Mathematics (IPAM) on the UCLA campus. It is part of a semester-long program on Tensor Methods and Emerging Applications to the Physical and Data Sciences.

New paper submission

The paper, titled Robust Recovery via Implicit Bias of Discrepant Learning Rates for Double Over-parameterization, has been submitted. This is joint work with Chong You, Zhihui Zhu, and Yi Ma.

A review paper on nonconvex optimization is released

The paper, titled From Symmetry to Geometry: Tractable Nonconvex Problems, reviews recent advances in nonconvex optimization from a geometric and landscape-analysis perspective. This is joint work with Yuqian Zhang and John Wright.