Learning Tracking Control for Physics-based Characters using Reinforcement Learning
- Libin Liu, DeepMotion
- Time: 2019-04-04 14:30
- Host: Prof. Yizhou Wang
- Venue: Room 204, Courtyard No.5, Jingyuan
Given a robust control system, simulation offers the potential for interactive human characters that respond naturally to the actions of the user or to changes in the environment. However, designing controllers that realize complex human movements remains a challenge in physics-based character animation. In this talk, we will cover several approaches for learning tracking controllers from reference motions to achieve natural and physically realistic simulation of human motion. Specifically, we will introduce methods for learning linear and non-linear feedback policies using reinforcement learning to achieve robust tracking control. We will then discuss several applications based on the learned control policies, including training a controller to realize basketball dribbling motions. Finally, we will discuss the limitations of tracking control and introduce a method for scheduling tracking control fragments to achieve more robust and interactive animation.
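To give a flavor of the linear feedback tracking control discussed in the talk, below is a minimal sketch in Python: a unit-mass point tracks a sinusoidal reference motion using a feedforward reference action plus a linear feedback term on the state deviation. The gain vector `K` is hand-tuned here; in the methods covered in the talk, such gains (or a non-linear policy) would be learned, e.g. via reinforcement learning. All function and variable names are illustrative, not from the speaker's work.

```python
import numpy as np

def linear_feedback_policy(state, ref_state, ref_action, K):
    # action = feedforward reference action minus linear feedback
    # on the deviation from the reference state
    return ref_action - K @ (state - ref_state)

def track(K, steps=500, dt=0.01):
    """Simulate a unit-mass point tracking a sinusoidal reference."""
    t = np.arange(steps) * dt
    ref_pos = np.sin(t)
    ref_vel = np.cos(t)
    ref_acc = -np.sin(t)          # feedforward action (force, unit mass)
    state = np.array([0.2, 0.0])  # perturbed start: 0.2 position offset
    errors = []
    for i in range(steps):
        ref_state = np.array([ref_pos[i], ref_vel[i]])
        a = linear_feedback_policy(state, ref_state, ref_acc[i], K)
        # semi-implicit Euler integration of the point-mass dynamics
        vel = state[1] + dt * a
        pos = state[0] + dt * vel
        state = np.array([pos, vel])
        errors.append(abs(pos - ref_pos[i]))
    return errors

K = np.array([400.0, 40.0])  # hand-tuned PD-style gains (illustrative)
errs = track(K)
print(f"initial error: {errs[0]:.3f}, final error: {errs[-1]:.5f}")
```

The feedback term drives the initial 0.2 position offset to nearly zero within a fraction of a second; without it (K = 0), the open-loop rollout would drift away from the reference. Learned non-linear policies generalize this idea to high-dimensional, contact-rich character dynamics where a single linear gain is insufficient.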
Libin Liu is currently the Chief Scientist of DeepMotion Inc. (https://deepmotion.com/). Before joining DeepMotion Inc., he was an R&D postdoctoral associate at Disney Research, Pittsburgh in 2016-2017 and a postdoctoral research fellow in the Imager Laboratory at the University of British Columbia in 2015. He received his Ph.D. degree in computer science in 2014 and his B.S. degree in mathematics and physics in 2009, both from Tsinghua University. His research interests include character animation, physics-based simulation, and motion control, as well as related areas such as optimal control, reinforcement learning, and deep learning. He has served on the program committees of major computer graphics conferences, including ACM SIGGRAPH, ACM SIGGRAPH Asia (posters and technical briefs), Pacific Graphics, the ACM SIGGRAPH/Eurographics Symposium on Computer Animation, and Motion in Games.