AAAI 2020 Tutorial

Explainable AI:

Foundations, Industrial Applications, Practical Challenges, and Lessons Learned
A 5-Hour and 15-Minute Tutorial
Saturday, February 8th, 2020
8:30 AM – 3:45 PM
Room: Sutton North
Download Slides

Overview

The future of AI lies in enabling people to collaborate with machines to solve complex problems. Like any effective collaboration, this requires good communication, trust, clarity, and understanding. XAI (eXplainable AI) aims to address these challenges by combining the best of symbolic AI and traditional machine learning. The topic has been studied for years by different AI communities, with differing definitions, evaluation metrics, motivations, and results.

This tutorial presents a snapshot of XAI work to date, surveying what the AI community has achieved, with a focus on machine learning and symbolic AI approaches. We will motivate the need for XAI in real-world, large-scale applications while presenting state-of-the-art techniques and best practices. In the first part of the tutorial, we give an introduction to the different aspects of explanation in AI. We then focus on two specific approaches: (i) XAI using machine learning and (ii) XAI using a combination of graph-based knowledge representation and machine learning. For both, we cover the specifics of the approach, the state of the art, and the limitations and research challenges for the next steps. The final part of the tutorial gives an overview of real-world applications of XAI.

Outline

Part I: Introduction and Motivation - 20 minutes

A broad-spectrum introduction to explanation in AI, describing and motivating the need for explainable AI techniques from both theoretical and applied standpoints. In this part we also summarize the prerequisites and introduce the different angles taken in the rest of the tutorial.

Part II: Explanation in AI (not only Machine Learning!) - 40 minutes

A general overview of explanation in various fields of AI (knowledge representation and reasoning, machine learning, search and constraint optimization, planning, natural language processing, robotics, and vision) to align everyone on the various definitions of explanation. The evaluation of explainability will also be covered. The tutorial will touch on most of these definitions but will go in depth only in the following areas: (i) Explainable Machine Learning and (ii) Explainable AI with Knowledge Graphs and Machine Learning.

Part III: Explainable Machine Learning (from a Machine Learning Perspective) - 75 minutes

In this section we tackle the broad problem of explainable machine learning pipelines. We describe the notion of explanation in the machine learning community, then survey a number of popular technique families: post-hoc explainability, explainability by design, example-based explanations, prototype-based explanations, and the evaluation of explanations. The core of this section is the analysis of different categories of black-box problems, ranging from black-box model explanation to black-box outcome explanation.
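
To ground the post-hoc, model-agnostic family mentioned above, here is a minimal illustrative sketch, not drawn from the tutorial slides: it treats a fitted classifier as a black box and explains it globally with permutation feature importance in scikit-learn. The dataset and model choices are assumptions made for the example.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative data and model; any fitted estimator can be probed this way.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Post-hoc explanation: shuffle each feature on held-out data and measure
# the drop in accuracy; larger drops mark features the black box relies on.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")

Global importances like these address black-box model explanation; the complementary black-box outcome explanation setting, also covered in this part, asks why a single prediction was made.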

Part IV: Explainable Machine Learning (from a Knowledge Graph Perspective) - 60 minutes

In this section of the tutorial we address the explanatory power of combining graph-based knowledge bases with machine learning approaches.
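
As a toy sketch of why this combination is attractive (illustrative only; the entities, relations, and recommendation scenario are hypothetical), a knowledge graph can supply a human-readable chain of facts that justifies the output of an otherwise opaque model:

from collections import deque

# A tiny knowledge graph as (subject, relation, object) triples; the
# entities and relations are made up for this example.
triples = [
    ("Alice", "follows", "MachineLearning"),
    ("MachineLearning", "subFieldOf", "AI"),
    ("XAI_Tutorial", "about", "AI"),
]

# Undirected adjacency map so paths can be traversed in both directions.
adj = {}
for s, r, o in triples:
    adj.setdefault(s, []).append((r, o))
    adj.setdefault(o, []).append((f"inverseOf({r})", s))

def explain(start, target):
    """Breadth-first search for a chain of facts linking a user to an item."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        node, path = queue.popleft()
        if node == target:
            return path
        for rel, nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(node, rel, nxt)]))
    return None

# Suppose a black-box model ranked "XAI_Tutorial" highly for Alice; the
# path of facts below can serve as a symbolic justification of that score.
print(explain("Alice", "XAI_Tutorial"))

Real systems typically pair learned knowledge-graph embeddings with evidence paths like these; the sketch only illustrates the explanatory role the graph can play alongside a learned model.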

Part V: XAI Tools on Applications, Lessons Learned and Research Challenges - 120 minutes

We will review open-source and commercial XAI tools applied to real-world examples. We focus on a number of use cases: i) explaining obstacle detection for autonomous trains; ii) an interpretable flight delay prediction system with built-in explanation capabilities; iii) a wide-scale contract management system that predicts and explains the risk tier of corporate projects using semantic reasoning over knowledge graphs; iv) an expenses system that identifies, explains, and predicts abnormal expense claims by employees of large organizations in 500+ cities; v) explanations for search and recommendation systems; vi) explaining sales predictions; vii) explaining lending decisions; viii) explaining fraud detection.

Schedule

Part I: Introduction and Motivation - 20 minutes

[08:30am - 08:50am]

Part II: Explanation in AI (not only Machine Learning!) - 40 minutes

[08:50am - 09:30am]

Part III: Explainable Machine Learning (from a Machine Learning Perspective) - 45 minutes

[09:30am - 10:15am]

Break - 30 minutes

[10:15am - 10:45am]

Part III: Explainable Machine Learning (from a Machine Learning Perspective) Cont'd - 30 minutes

[10:45am - 11:15am]

Part IV: Explainable Machine Learning (from a Knowledge Graph Perspective) - 60 minutes

[11:15am - 12:15pm]

Part V: XAI Tools on Applications, Lessons Learned and Research Challenges - 15 minutes

[12:15pm - 12:30pm]

Break - 90 minutes

[12:30pm - 02:00pm]

Part V: XAI Tools on Applications, Lessons Learned and Research Challenges Cont'd - 105 minutes

[02:00pm - 03:45pm]

Presenters

Freddy Lecue

Freddy Lecue (PhD 2008, Habilitation 2015) is the Chief Artificial Intelligence (AI) Scientist at CortAIx (Centre of Research & Technology in Artificial Intelligence eXpertise) @Thales in Montreal, Canada. He is also a research associate at INRIA, in WIMMICS, Sophia Antipolis, France. Before joining Thales, he was principal scientist and research manager in artificial intelligent systems (systems combining learning and reasoning capabilities) at Accenture Technology Labs, Dublin, Ireland. Before Accenture Labs, he was a Research Scientist at the IBM Research Smarter Cities Technology Center (SCTC) in Dublin, Ireland, and lead investigator of the Knowledge Representation and Reasoning group. His main research interest is explainable AI systems. The application domain of his current research is smarter cities, with a focus on smart transportation and buildings. In particular, he is interested in exploiting and advancing knowledge representation and reasoning methods for representing and inferring actionable insight from large, noisy, and heterogeneous data. He has over 40 publications in refereed journals and conferences related to Artificial Intelligence (AAAI, ECAI, IJCAI, IUI) and the Semantic Web (ESWC, ISWC), all describing new systems for handling expressive semantic representation and reasoning. He co-organized the first workshops on semantic cities (AAAI 2012, 2014, 2015; IJCAI 2013) and the first tutorials on smart cities at AAAI 2015 and IJCAI 2016. Prior to joining IBM, Freddy Lecue was a Research Fellow (2008-2011) with the Centre for Service Research at The University of Manchester, UK. He was awarded the second prize for his Ph.D. thesis by the French Association for the Advancement of Artificial Intelligence in 2009, and received the Best Research Paper Award at the ACM/IEEE Web Intelligence conference in 2008.

Krishna Gade

Krishna Gade is the founder and CEO of Fiddler Labs, an enterprise startup building an explainable AI engine to address problems of bias, fairness, and transparency in AI. An entrepreneur and engineering leader with strong technical experience in creating scalable platforms and delightful consumer products, Krishna previously held senior engineering leadership roles at Facebook, Pinterest, Twitter, and Microsoft. He has given several invited talks at prominent practitioner forums, including a talk on addressing bias, fairness, and transparency in AI at the Strata Data Conference in 2019.

Sahin Cem Geyik

Sahin Cem Geyik has been part of the Careers/Talent AI teams at LinkedIn for the past three years, focusing on personalized and fairness-aware recommendations across several LinkedIn Talent Solutions products. Prior to LinkedIn, he was a research scientist at Turn Inc., an online advertising startup later acquired by Amobee, a subsidiary of Singtel. He received his Ph.D. in Computer Science from Rensselaer Polytechnic Institute in 2012, and his Bachelor's degree in Computer Engineering from Bogazici University, Istanbul, Turkey, in 2007. Sahin has worked on various research topics in machine learning, spanning online advertising models and algorithms, recommender and search systems, fairness-aware ML, and explainability. He has also performed extensive research in the systems domain, resulting in multiple publications in the ad-hoc/sensor networks and service-oriented architecture fields. Sahin has authored papers in several top-tier conferences and journals, such as KDD, WWW, INFOCOM, SIGIR, ICDM, CIKM, IEEE TMC, and IEEE TSC, and has presented his work at multiple external venues.

Krishnaram Kenthapadi

Krishnaram Kenthapadi is a Principal Scientist at Amazon AWS AI, where he leads the fairness, explainability, and privacy initiatives on the Amazon AI platform. Until recently, he led similar efforts across different LinkedIn applications as part of the LinkedIn AI team, and served as LinkedIn's representative on Microsoft's AI and Ethics in Engineering and Research (AETHER) Advisory Board. He shaped the technical roadmap and led the privacy/modeling efforts for the LinkedIn Salary product, and prior to that served as the relevance lead for the LinkedIn Careers and Talent Solutions Relevance team, which powers search/recommendation products at the intersection of members, recruiters, and career opportunities. Previously, he was a Researcher at Microsoft Research Silicon Valley, where his work resulted in product impact (and Gold Star / Technology Transfer awards) and several publications/patents. Krishnaram received his Ph.D. in Computer Science from Stanford University in 2006, and his Bachelor's in Computer Science from IIT Madras. He serves regularly on the program committees of KDD, WWW, WSDM, and related conferences, and co-chaired the 2014 ACM Symposium on Computing for Development. He received Microsoft's AI/ML conference (MLADS) distinguished contribution award, the NAACL best thematic paper award, the CIKM best case studies paper award, the SODA best student paper award, and a WWW best paper award nomination. He has published 40+ papers with 2500+ citations and filed 140+ patents (30+ granted). He has presented lectures/tutorials on privacy, fairness, and explainable AI in industry at forums such as KDD '18 and '19, WSDM '19, WWW '19, and FAccT '20, and instructed a course on AI at Stanford.

Varun Mithal

Varun Mithal is an AI researcher at LinkedIn, where he works on jobs and hiring recommendations. Prior to joining LinkedIn, he received his PhD in Computer Science from the University of Minnesota, Twin Cities, and his Bachelor's in Computer Science from the Indian Institute of Technology Kanpur. He has developed several algorithms to identify rare classes and anomalies using unsupervised change detection as well as supervised learning from weak labels. His thesis also explored machine learning models for scientific domains that incorporate physics-based constraints and make them interpretable for domain scientists. He has published 20 papers with 350+ citations. His work has appeared in top-tier data mining conferences and journals such as IEEE TKDE, AAAI, and ICDM.

Ankur Taly

Ankur Taly is the Head of Data Science at Fiddler Labs, where he is responsible for developing and evangelizing core explainable AI technology. Previously, he was a Staff Research Scientist at Google Brain, where he carried out research in explainable AI and is best known for his contribution to developing and applying Integrated Gradients (220+ citations), an interpretability algorithm for deep networks. His research in this area has resulted in publications at top-tier machine learning conferences (ICML 2017, ACL 2018) and in prestigious journals such as the American Academy of Ophthalmology (AAO) journal and the Proceedings of the National Academy of Sciences (PNAS). He has also given invited talks at several academic and industrial venues, including UC Berkeley (DREAMS seminar), SRI International, the Dagstuhl seminar, and Samsung AI Research. Besides explainable AI, Ankur has a broad research background and has published 25+ papers in several other areas, including computer security, programming languages, formal verification, and machine learning. He has served on several conference program committees (PLDI 2014 and 2019, POST 2014, PLAS 2013), given guest lectures in graduate courses, and instructed a short course on distributed authorization at the FOSAD summer school in 2016. Ankur obtained his Ph.D. in Computer Science from Stanford University in 2012 and a B.Tech. in Computer Science from IIT Bombay in 2007.

Riccardo Guidotti

Riccardo Guidotti is currently a post-doc researcher at the Department of Computer Science, University of Pisa, Italy, and a member of the Knowledge Discovery and Data Mining Laboratory (KDDLab), a joint research group with the Information Science and Technology Institute of the National Research Council in Pisa. Riccardo was born in 1988 in Pitigliano (GR), Italy. He graduated cum laude in Computer Science at the University of Pisa (BS in 2010, MS in 2013) and received his PhD in Computer Science from the same institution with a thesis on personal data analytics. He won an IBM fellowship and interned at IBM Research, Dublin, Ireland, in 2015. His research interests are personal data mining, clustering, explainable models, and the analysis of transactional data related to recipes and migration flows.

Pasquale Minervini

Pasquale Minervini is a Research Associate at University College London (UCL), United Kingdom, working with the Machine Reading group led by Prof. Sebastian Riedel. He received a Ph.D. in Computer Science from the University of Bari, Italy, with a thesis titled "Mining Methods for the Web of Data", advised by Prof. Nicola Fanizzi. After obtaining his Ph.D., Pasquale worked as a postdoctoral researcher at the University of Bari and at the INSIGHT Centre for Data Analytics, Galway, Ireland. At INSIGHT, he worked in the Knowledge Engineering and DIscovery (KEDI) group, composed of researchers and engineers from INSIGHT and Fujitsu Ireland Research and Innovation. Over the course of his research career, Pasquale has published 29 peer-reviewed papers, including at top-tier AI conferences (such as UAI, AAAI, ICDM, CoNLL, ECML, and ESWC), and has received two best paper awards. He is the main inventor of a patent application assigned to Fujitsu Ltd. For more information, see http://www.neuralnoise.com