AIGC beyond Images: 3D and Video Synthesis
- Dr. Qifeng Chen, HKUST
- Time: 2023-01-11 14:00
- Host: Dr. Hao Dong
- Venue: Online Talk
We have witnessed the great advancement of AI-generated content (AIGC), such as DALL·E 2 and Stable Diffusion, which can synthesize photorealistic or artistic images from a text description. What will be the next steps for AIGC? Will 3D or video synthesis be the next major breakthrough? In this talk, I will share some of my research on 3D and video synthesis using generative models, including GANs and diffusion models. I will discuss key designs that lead to substantial improvements in scene-level and object-level 3D synthesis, automatic 3D avatar generation and editing, and infinitely long video synthesis in the wild.
Qifeng Chen is an assistant professor at The Hong Kong University of Science and Technology. He received his Ph.D. in computer science from Stanford University in 2017. His research interests include image processing and synthesis, 3D vision, and autonomous driving. He was named one of the 35 Innovators under 35 in China by MIT Technology Review and received a Google Faculty Research Award in 2018. He has published more than 70 papers at top AI-related international conferences and journals, including CVPR, ICCV, ECCV, TPAMI, ICML, NeurIPS, AAAI, ACM Multimedia, ICRA, IROS, and CoRL. He served as an associate editor for IROS 2020-2022, as an Area Chair of CVPR, and on the senior program committees of AAAI and IJCAI. He won 2nd place worldwide at the ACM-ICPC World Finals and a gold medal at the IOI.
Zoom Meeting ID: 852 0069 0066