Category-Level Object Perception for Physical Interaction
- Dr. He Wang, Stanford University
- Time: 2021-01-29 11:00
- Host: Dr. Libin Liu
- Venue: Online Talk
Deep neural networks have shown great success both in semantic perception tasks, e.g., object recognition and semantic segmentation, and in end-to-end perception for reinforcement learning and robotic tasks. However, it is still unclear how to bridge these two perception paradigms to gain a deep semantic and interaction-driven understanding of physical interaction. In this talk, I will focus on how to extract categorical actionable information for perceiving and understanding physical interactions. I will show that learning high-level semantic actionable information, e.g., object state, can help with action planning. Then, I will introduce the problem of estimating category-level 6D pose and 3D size for rigid objects. This category-level pose can be seen as low-level actionable information and can benefit object manipulation tasks. Furthermore, I will present my work on curating an articulated object dataset and estimating category-level articulated object pose. I will conclude the talk by discussing current research topics and future directions in learning-based 3D computer vision.
He Wang is a senior PhD student at Stanford University under the supervision of Prof. Leonidas Guibas. He will be joining the Center on Frontiers of Computing Studies at Peking University as a tenure-track assistant professor in July 2021. His research interests span computer vision, geometric computing, and robotics. During his PhD, he contributed to generative modeling of human-object interactions and opened up a new direction in estimating category-level pose and size for rigid and articulated objects. He received the Eurographics 2019 Best Paper Honorable Mention award, and three of his works were accepted as CVPR oral presentations. Prior to his PhD, he obtained his bachelor's degree in Microelectronics from Tsinghua University.