Agent Learning in the Emergence of Complex World
- Yali Du, University College London
- Time: 2020-04-05 15:40
- Host: Prof. Yizhou Wang
- Venue: Online Talk
Over the past few years, we have witnessed great success of AI in many applications, including image classification and recommendation systems. This success shares a common paradigm: learning from static datasets of inputs and outputs. Nowadays, we are experiencing a paradigm shift from pattern recognition to decision making: instead of extracting knowledge from static datasets, agents learn through feedback on their own decisions. Moreover, as machine learning models are deployed in the real world, these systems begin to affect one another, turning decision making into a multi-agent problem. Therefore, agent learning in a complex world is a fundamental problem for the next generation of AI to empower various multi-agent environments.
As case studies, I will present GridNet, which can flexibly control an arbitrary number of agents, and LIIR, which generates diversified behaviors while receiving only a team reward in a cooperative multi-agent system. These methods achieve new state-of-the-art results on StarCraft II environments, which have recently emerged as challenging RL benchmarks with high stochasticity, large state-action spaces, and delayed rewards. Finally, I will discuss some future directions in multi-agent learning.
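To illustrate the credit-assignment idea behind LIIR-style methods, here is a minimal toy sketch (the function name, the mixing weight, and the fixed intrinsic values are illustrative assumptions, not the talk's actual algorithm): each agent receives a proxy reward that mixes the shared team reward with a per-agent intrinsic reward, so agents obtain diversified learning signals even though the environment emits only one team reward.

```python
import numpy as np

def proxy_rewards(team_reward, intrinsic_rewards, lam=0.1):
    """Mix one shared team reward with per-agent intrinsic rewards.

    In LIIR-style methods the intrinsic rewards are themselves learned
    so that maximizing each agent's proxy reward also improves the team
    reward; here they are fixed numbers as a toy stand-in.
    """
    intrinsic_rewards = np.asarray(intrinsic_rewards, dtype=float)
    return team_reward + lam * intrinsic_rewards

# Three agents share one team reward but receive distinct signals.
r_team = 1.0
r_intrinsic = [0.5, -0.2, 0.0]  # hypothetical learned intrinsic rewards
print(proxy_rewards(r_team, r_intrinsic))  # one proxy reward per agent
```

In the actual method, the intrinsic-reward parameters would be updated so that policy improvement under the proxy rewards aligns with the team objective; this sketch only shows the reward-mixing step.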
Dr. Yali Du is a postdoctoral research fellow at University College London and a visiting researcher at Huawei London Research Lab. She obtained her Ph.D. in AI in 2019 from the University of Technology Sydney, under the supervision of Dacheng Tao. Her research interests lie in multi-agent problems, including flexible and diverse control, the emergence of interaction, multi-agent credit assignment, and the robustness of agent learning. Her work has been published in prestigious venues including ICML, NeurIPS, IJCAI, ACM MM, and IEEE TMM.