CFCS Youth Talks

Interpretable Representation Learning for Visual Intelligence

  • Speaker: Bolei Zhou, Massachusetts Institute of Technology
  • Time: 2018-04-02 10:25
  • Host: Prof. Baoquan Chen
  • Venue: Room 101, Courtyard No.5, Jingyuan

Abstract

Recent progress in deep neural networks for computer vision and machine learning has enabled transformative applications across robotics, healthcare, and security. However, despite their superior performance, it remains challenging to understand the inner workings of deep neural networks and to explain their output predictions. My research has pioneered several novel approaches to opening up the “black box” of neural networks used in vision tasks. In this talk, I will first show that objects and other meaningful concepts emerge as a consequence of recognizing scenes. A network dissection approach is then introduced to automatically identify these emergent concepts and quantify their interpretability. Next, I will describe an approach that can efficiently explain the output prediction for any given image, shedding light on the decision-making process of the networks and why they succeed or fail. Finally, I will talk about ongoing efforts toward learning efficient and interpretable deep representations for video event understanding, with applications in robotics and medical image analysis.
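
The abstract does not spell out the explanation method, but a representative way to attribute a CNN's prediction to image regions is a class activation map (CAM). The sketch below is only a minimal illustration of that idea, assuming a torchvision ResNet-18 (a network ending in global average pooling); the model choice, function names, and placeholder input are illustrative and not taken from the talk.

# Minimal CAM-style sketch (an assumption for illustration, not the speaker's exact method):
# for a CNN ending in global average pooling, the classifier weights of the predicted
# class act as per-channel importance scores over the last convolutional feature maps.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

features = {}
model.layer4.register_forward_hook(
    lambda module, inp, out: features.update(maps=out.detach())  # (1, C, h, w)
)

def class_activation_map(image, class_idx=None):
    """Return an (h, w) heatmap of the regions supporting the prediction."""
    with torch.no_grad():
        logits = model(image)                      # (1, num_classes)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    weights = model.fc.weight[class_idx]           # (C,) class-specific weights
    cam = torch.einsum("c,chw->hw", weights, features["maps"][0])
    cam = F.relu(cam)                              # keep positive evidence only
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam, class_idx

# Usage with a placeholder input; a real image would be resized to 224x224
# and normalized with ImageNet statistics before being passed in.
heatmap, cls = class_activation_map(torch.randn(1, 3, 224, 224))
print(heatmap.shape, cls)                          # torch.Size([7, 7]), class index

Upsampling the heatmap to the input resolution and overlaying it on the image gives the familiar saliency-style visualization; again, this sketches the general idea rather than the specific approach presented in the talk.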

Biography

Bolei Zhou is a doctoral candidate in computer science at the Massachusetts Institute of Technology. His research is in computer vision and machine learning, focusing on visual recognition and interpretable deep learning. He has received the Facebook Fellowship, the Microsoft Research Fellowship, and the MIT Greater China Fellowship, and his research has been featured in media outlets such as TechCrunch, Quartz, and MIT News.