Unlocking the Potential of Physics-Inspired Generative Models
- Speaker: Yilun Xu, MIT
- Time: 2023-08-11 10:00
- Host: Prof. Yizhou Wang
- Venue: Online Talk
Abstract
In this talk, I will introduce a new family of physics-inspired generative models called PFGM++ that unifies diffusion models and Poisson Flow Generative Models (PFGM). Both PFGM and its improved version, PFGM++, are derived from electrostatics. In the first half of the talk, I will briefly discuss the fundamental concept behind the vanilla PFGM (the D=1 case), which interprets data points as electric charges on the z=0 hyperplane in a space augmented with an additional dimension z. I will then demonstrate how to arrive at the more general PFGM++ framework by extending the dimensionality of the augmented variable from 1 to a general positive number D. PFGM++ reduces to PFGM when D=1 and to diffusion models as D approaches infinity. More intriguingly, interpolating between the two models reveals a sweet spot that achieves new state-of-the-art performance in image generation. Furthermore, the flexibility offered by D lets us trade off robustness against training difficulty depending on the task and model at hand. Our experiments demonstrate that models with finite D can outperform the previous best diffusion models on several image generation tasks while exhibiting improved robustness.
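As a rough sketch of the construction (following the PFGM++ paper; the notation here is illustrative, not part of the talk itself): for an N-dimensional data point \mathbf{y}, the framework perturbs data with the heavy-tailed kernel

    p_r(\mathbf{x} \mid \mathbf{y}) \propto \left( \lVert \mathbf{x} - \mathbf{y} \rVert_2^2 + r^2 \right)^{-(N+D)/2},

where r is the norm of the D-dimensional augmented variable. Setting D = 1 recovers the electrostatic field-line construction of PFGM, while taking D \to \infty with the alignment r = \sigma \sqrt{D} makes the kernel converge to the Gaussian perturbation \mathcal{N}(\mathbf{y}, \sigma^2 \mathbf{I}) used by diffusion models.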
During the talk, I will also cover a new training algorithm (Stable Target Field) and a new sampling algorithm (Restart Sampling). Although both algorithms originally emerged as byproducts of the development of PFGM/PFGM++, they can also enhance other physics-inspired generative models. For example, Restart Sampling significantly boosts the performance of the text-to-image Stable Diffusion model by combining the best of deterministic and stochastic sampling.
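For intuition, here is a minimal Python sketch of the Restart idea (the solver ode_solve, the noise schedule sigma, and all parameter names are hypothetical placeholders under an EDM/VE-style parameterization, not the released implementation):

import numpy as np

def restart_sampling(ode_solve, sigma, x_T, T, t_min, t_max, K):
    """Alternate deterministic ODE integration with noise re-injection.

    ode_solve(x, t_from, t_to): deterministic backward ODE solver.
    sigma(t): noise level at time t.
    t_min < t_max bound the restart interval; K is the number of restarts.
    """
    # Deterministic ODE from t = T down to t_min.
    x = ode_solve(x_T, T, t_min)
    for _ in range(K):
        # Forward jump t_min -> t_max: re-inject the missing noise variance.
        noise = np.random.randn(*x.shape)
        x = x + np.sqrt(sigma(t_max) ** 2 - sigma(t_min) ** 2) * noise
        # Deterministic backward ODE from t_max down to t_min.
        x = ode_solve(x, t_max, t_min)
    # Finish deterministically from t_min down to t = 0.
    return ode_solve(x, t_min, 0.0)

The large forward noise jumps contract accumulated sampling error (the benefit of stochasticity), while the backward passes remain deterministic and fast, which is the sense in which Restart combines the best of both regimes.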
Biography
Yilun Xu is a third-year EECS PhD student at MIT CSAIL, advised by Tommi Jaakkola. He is also a research intern at NVIDIA this summer. His research focuses on machine learning, with a current emphasis on a new family of physics-inspired generative models and on training and sampling algorithms for diffusion models. He has published more than ten papers at top conferences such as ICML, NeurIPS, and ICLR, several of which were selected for oral presentations. Previously, he did research at Peking University CFCS and Stanford under the guidance of Yizhou Wang, Yuqing Kong, and Stefano Ermon, working on bridging information theory and machine learning. Personal homepage: yilun-xu.com
Admission
Zoom Meeting ID: 831 8347 7016
Passcode: 270469