Events
CS Peer Talks

Learning Mixture Models via Efficient High-dimensional Sparse Fourier Transforms

  • Shuchen Li, Yale University
  • Time: 2026-04-21 15:00
  • Host: Turing Class Research Committee
  • Venue: Room 204, Courtyard No.5, Jingyuan

Abstract

In this work, we give a poly(d, k)-time and -sample algorithm for learning the parameters of a mixture of k spherical distributions in d dimensions. Unlike all previous methods, our techniques apply to heavy-tailed distributions, including examples that do not even have finite covariance. Our method succeeds whenever the cluster distributions have a characteristic function with sufficiently heavy tails. Such distributions include the Laplace distribution but, crucially, exclude Gaussians. All previous methods for learning mixture models relied, implicitly or explicitly, on low-degree moments; even for the case of Laplace distributions, we prove that any such algorithm must use super-polynomially many samples. Our method thus adds to the short list of techniques that bypass the limitations of the method of moments. Somewhat surprisingly, our algorithm does not require any minimum separation between the cluster means. This is in stark contrast to spherical Gaussian mixtures, where a minimum ℓ2-separation is provably necessary even information-theoretically [Regev and Vijayaraghavan '17]. Our methods compose well with existing techniques and yield "best of both worlds" guarantees for mixtures in which every component either has a heavy-tailed characteristic function or has a sub-Gaussian tail with a light-tailed characteristic function.

Our algorithm is based on a new approach to learning mixture models via efficient high-dimensional sparse Fourier transforms. We believe this method will find further applications in statistical estimation. As an example, we give an algorithm for consistent robust mean estimation against noise-oblivious adversaries, a model practically motivated by the literature on multiple hypothesis testing. The model was formally proposed in a recent Master's thesis by one of the authors and has already inspired follow-up works. Based on joint work with Alkis Kalavasis, Pravesh Kothari, and Manolis Zampetakis.
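(A one-dimensional illustration of the tail condition, under standard parameterizations with illustrative scale parameters b and σ not taken from the talk: the Laplace characteristic function decays only polynomially in the frequency t, while the Gaussian one decays super-polynomially,

    φ_Laplace(0,b)(t) = 1 / (1 + b²t²)    vs.    φ_N(0,σ²)(t) = exp(−σ²t²/2),

so a Fourier-based method still sees usable signal from Laplace components at high frequencies, exactly where Gaussian components contribute essentially nothing.)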

Biography

Shuchen Li is a second-year Ph.D. student in Computer Science at Yale University, advised by Manolis Zampetakis and Ilias Zadik. Before Yale, he received his B.S. in Computer Science from the Turing Class at Peking University and his M.S. in Computer Science from Carnegie Mellon University. His research interests lie broadly at the intersection of high-dimensional statistics, theoretical machine learning, and computational complexity. Recently, he has been thinking about problems in algorithmic statistics, learning theory, and computational-statistical trade-offs.