Rationalizing Deep Learning Model Decisions

Professor Wynne Hsu

National University of Singapore

Project Description

This research aims to provide strong explanatory power without sacrificing the performance of machine learning models. The focus is to develop a general unified framework that can serve as a principled basis for interpreting deep learning models in medical image classification applications. We will provide two levels of interpretability: decision confidence and decision justification. Decision confidence refers to a second-order estimate of confidence, separate from the existing first-order classification predictions. While a model may deem an input likely to belong to one class over other competing classes, it may simultaneously be extremely unsure of this decision. For decision justification, we propose a novel linguistic justification generator that is model-based, intuitive, and precise. In other words, the explanation is derived directly from the model being interpreted and indicates the features that influenced its decision.
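To illustrate the distinction between first-order predictions and a second-order confidence estimate, the sketch below computes both from a set of stochastic forward passes (e.g., Monte Carlo dropout samples). This is a minimal NumPy illustration under assumed inputs; the function name and the spread-based score are illustrative only, not the project's actual method.

```python
import numpy as np

def second_order_confidence(prob_samples):
    """Given T stochastic forward passes (e.g., MC dropout), each yielding a
    softmax distribution over classes, return the first-order prediction and
    an illustrative second-order confidence score in [0, 1]."""
    probs = np.asarray(prob_samples)           # shape (T, num_classes)
    mean_probs = probs.mean(axis=0)            # first-order class probabilities
    predicted_class = int(mean_probs.argmax())
    # Second-order signal: disagreement between samples on the winning class.
    # A high standard deviation means the model is unsure of its own decision,
    # even if the averaged prediction clearly favors one class.
    spread = probs[:, predicted_class].std()
    confidence = 1.0 - min(1.0, 2.0 * spread)
    return predicted_class, mean_probs, confidence

# Consistent samples: the model favors class 0 and is sure of that decision.
cls, _, conf_sure = second_order_confidence([[0.9, 0.1]] * 5)

# Disagreeing samples: class 0 still wins on average, but with low confidence.
cls2, _, conf_unsure = second_order_confidence(
    [[0.9, 0.1], [0.2, 0.8], [0.8, 0.2], [0.3, 0.7], [0.85, 0.15]]
)
```

Both calls predict class 0, but the second yields a much lower confidence score, capturing the case described above where the winning class and the model's certainty about that win diverge.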

Research Technical Area

Machine learning

Benefit to the society

The output of this research will increase clinicians' rate of adoption of AI systems.

Team's Principal Investigator

Professor Wynne Hsu
School of Computing
National University of Singapore

Principal Investigator’s Core Research Technical Areas

  • Knowledge representation and reasoning
  • Machine learning
  • Natural language processing (NLP)

Introduction of the Principal Investigator

Wynne Hsu received her B.Sc. in Computer Science from the National University of Singapore, and her M.Sc. and Ph.D. in Electrical Engineering from Purdue University, West Lafayette, U.S.A., in 1989 and 1994, respectively. She is currently a Provost's Chair Professor in the Department of Computer Science, School of Computing, National University of Singapore (NUS). Her research interests include data analytics in the context of social networks, machine learning, and retinal image analysis.

Recent Notable Awards

  • President’s Technology Award, Singapore, 2014
  • SIGKDD Test-of-Time Award, 2014

Team

Co-Principal Investigators

Prof. Lee Mong Li

National University of Singapore

Research Areas:

  1. Knowledge representation and reasoning
  2. Machine learning
  3. Natural language processing (NLP)

Prof. Wong Tien Yin

National University of Singapore
Singapore National Eye Centre

Research Areas:

  1. Machine learning

Collaborators

Dr Gilbert Lim, National University of Singapore
A/Prof Rudy Setiono, National University of Singapore