Deep Learning and Medical Imaging Applications: Challenges and New Approaches
- Prof. Danny Z. Chen, University of Notre Dame
- Time: 2019-02-27 10:00
- Host: Prof. Xiaotie Deng
- Venue: Room 102, Courtyard No.5, Jingyuan
New technologies for acquiring very large amounts of medical image data place ever-increasing demands on effective approaches for medical image processing tasks. In recent years, deep learning (DL) techniques have achieved remarkably high-quality solutions for many medical imaging applications, largely outperforming traditional image processing methods. DL methods commonly require large amounts of labeled (annotated) data for model training. While natural scene images are normally 2D, medical images can be 2D, 3D, or of even higher dimensions. 3D medical image processing presents new challenges to deep learning techniques. (1) 3D images are often of very large size and thus incur very high processing costs, yet GPUs have only limited memory for implementing DL models. (2) No efficient techniques (automatic or manual) are currently known for annotating 3D images. Furthermore, usually only trained medical experts can annotate medical images well, which makes medical image annotation a highly costly and labor-intensive process (even for 2D images). Therefore, obtaining sufficient good-quality annotated image data for DL model training while significantly reducing manual annotation effort is a major bottleneck in developing effective DL approaches for medical imaging applications. New effective and efficient DL approaches for processing 3D medical images are also critically needed.
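To make challenge (1) concrete: a common workaround in the field (not specific to the approaches presented in this talk) is to run a segmentation model over a large 3D volume in patches that individually fit in GPU memory and then stitch the results. A minimal NumPy sketch, with hypothetical sizes and a simple threshold standing in for a real DL model:

```python
import numpy as np

# Hypothetical 3D volume: 256^3 voxels of float32 data.
# At 4 bytes/voxel this is already ~64 MB for a single channel;
# a DL model with hundreds of feature channels per layer quickly
# exhausts the few GB to tens of GB of memory on a GPU.
volume = np.random.rand(256, 256, 256).astype(np.float32)
print(f"one channel: {volume.nbytes / 2**20:.0f} MB")

def segment_patch(patch):
    """Stand-in for a DL segmentation model applied to one patch.
    A plain threshold keeps the sketch self-contained and runnable."""
    return (patch > 0.5).astype(np.uint8)

def segment_by_patches(vol, patch=64):
    """Process the volume in non-overlapping cubic patches that each
    fit in memory, then stitch the per-patch results back together."""
    out = np.empty(vol.shape, dtype=np.uint8)
    for x in range(0, vol.shape[0], patch):
        for y in range(0, vol.shape[1], patch):
            for z in range(0, vol.shape[2], patch):
                block = vol[x:x+patch, y:y+patch, z:z+patch]
                out[x:x+patch, y:y+patch, z:z+patch] = segment_patch(block)
    return out

result = segment_by_patches(volume)
# For this voxel-wise model, patching gives the same answer as
# processing the whole volume at once, with bounded peak memory.
assert np.array_equal(result, segment_patch(volume))
```

In practice, patches are usually overlapped and the predictions blended to avoid artifacts at patch borders, since real DL models use spatial context; the non-overlapping version above is only the simplest illustration of the memory trade-off.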
In this talk, we present new DL-based approaches for considerably alleviating the annotation burden in medical image segmentation: a new scheme that improves the effectiveness of manual annotation by selecting the most useful object samples to annotate; a new method that improves the efficiency of manual annotation by allowing inexact, rough labeling; and a new end-to-end DL model for 3D instance segmentation based on weak annotation. Further, we present several new methods for segmenting 3D medical images (including special hardware-based solutions).
Dr. Danny Z. Chen (陈子仪) received B.S. degrees in Computer Science and Mathematics from the University of San Francisco, California, USA in 1985, and M.S. and Ph.D. degrees in Computer Science from Purdue University, West Lafayette, Indiana, USA in 1988 and 1992, respectively. He has been on the faculty of the Department of Computer Science and Engineering at the University of Notre Dame, Indiana, USA since 1992, where he is currently a Professor. Dr. Chen's main research interests include computational biomedicine, biomedical imaging, computational geometry, algorithms and data structures, machine learning, data mining, and VLSI. He has worked extensively with biomedical researchers and practitioners, has published many journal papers and peer-reviewed conference papers in these areas, and holds five US patents for technology developments in computer science and engineering and biomedical applications. He received the CAREER Award of the US National Science Foundation (NSF) in 1996, a Laureate Award in the 2011 Computerworld Honors Program for developing "Arc-Modulated Radiation Therapy" (a new radiation cancer treatment approach), and the 2017 PNAS Cozzarelli Prize of the US National Academy of Sciences. He is a Fellow of the IEEE and a Distinguished Scientist of the ACM.