Explaining AI with the Right Level of Abstraction

Associate Professor Jun Sun

Singapore Management University

Project Description

Despite the exceptional performance of deep neural networks, it is important that we develop “explainable AI”. Explainable AI is critical in multiple ways, e.g., to gain the trust of human users and to produce causal models that suggest model improvements. The challenge is that we must be able to explain AI at a level tailored to the limited cognitive bandwidth of humans.

We aim to address this problem by adopting a classic concept in computer science and human reasoning, i.e., abstraction, as the primary method for reducing the complicated models used in AI applications into simple, transparent and explainable ones. Fundamentally, human brains understand and reason abstractly depending on the task at hand, i.e., they build a simple model containing only the most relevant information needed to make a particular decision. We thus aim to build our solutions for explaining AI at the right level of abstraction, i.e., in an application-dependent and probabilistic way.
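To give a flavour of the idea, one well-known instance of abstraction for explanation is surrogate-model distillation: a complex model is treated as a black box and approximated by a simple, human-readable rule that mimics its decisions. The sketch below is purely illustrative and not the project's actual method; all names (`complex_model`, `fit_threshold_rule`) and the toy decision function are assumptions for the example.

```python
# Minimal sketch: abstract a black-box classifier into a one-feature
# threshold rule, reporting how faithfully the rule mimics the model.

def complex_model(x):
    # Stand-in for an opaque model: a nonlinear decision over two features.
    return 1 if (0.7 * x[0] + 0.3 * x[1] ** 2) > 0.5 else 0

def fit_threshold_rule(model, samples, feature, candidates):
    """Pick the threshold on one feature that best mimics the model."""
    labels = [model(x) for x in samples]
    best_t, best_acc = None, -1.0
    for t in candidates:
        preds = [1 if x[feature] > t else 0 for x in samples]
        acc = sum(p == y for p, y in zip(preds, labels)) / len(samples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

# Sample the input space on a grid and abstract the model on feature 0.
samples = [(i / 10.0, j / 10.0) for i in range(11) for j in range(11)]
threshold, fidelity = fit_threshold_rule(
    complex_model, samples, feature=0,
    candidates=[t / 20.0 for t in range(21)],
)
print(f"abstraction: predict 1 iff x0 > {threshold:.2f} (fidelity {fidelity:.2f})")
```

The surrogate rule is far simpler than the original decision function, and the reported fidelity quantifies the probabilistic nature of the abstraction: the explanation is accurate most of the time, not always.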

Research Technical Areas

Knowledge representation and reasoning

Reasoning under uncertainty

Benefits to Society

The outcome of the project will potentially improve the safety, security and reliability of AI systems across many application domains.

Team's Principal Investigator

Associate Professor Jun Sun
Singapore Management University

Introduction of the Principal Investigator

Dr. SUN, Jun is currently an associate professor at Singapore Management University (SMU). He received his Bachelor's and PhD degrees in computing science from the National University of Singapore (NUS) in 2002 and 2006, respectively. In 2007, he was awarded the prestigious Lee Kuan Yew postdoctoral fellowship. He has been a faculty member since 2010 and was a visiting scholar at MIT from 2011 to 2012. Jun's research interests include software engineering, cyber-security, formal methods and artificial intelligence.

Recent Notable Awards

  • Technology Cooperation Excellence Award, Huawei, 2019
  • ACM Distinguished Paper Award, 2018
  • The 20-Year ICFEM Most Influential System Award, for developing the PAT verification system, 2018

Team

Co-Principal Investigator

Dr. Lu Wei

Singapore University of Technology and Design

Research Areas:

  1. Machine learning
  2. Natural language processing
  3. Artificial intelligence

Collaborators

Dr. Wang Jingyi, National University of Singapore
Research Interests: Formal methods, security and AI