Explainable Artificial Intelligence - Demystifying the Hype

The field of Artificial Intelligence, powered by Machine Learning and Deep Learning, has gone through phenomenal changes over the last decade. Starting off as a purely academic and research-oriented domain, it has seen widespread industry adoption across diverse areas including retail, technology, healthcare, science and many more. More often than not, the standard toolbox of machine learning, statistical and deep learning models remains the same. New models do come into existence, like Capsule Networks, but industry adoption of them usually takes several years. Hence, in the industry, the main focus of data science and machine learning is 'applied' rather than theoretical, and the effective application of these models on the right data to solve complex real-world problems is of paramount importance.

A machine learning or deep learning model is, at its core, an algorithm that tries to learn latent patterns and relationships from data without hard-coding fixed rules. Hence, explaining how a model works to the business always poses its own set of challenges. In some industry domains, especially in the world of finance such as insurance or banking, data scientists often end up having to use more traditional machine learning models (linear or tree-based), because model interpretability is critical: the business needs to be able to explain each and every decision the model takes. However, this often leads to a sacrifice in performance. Complex models like ensembles and neural networks typically give us better, more accurate performance (since true relationships are rarely linear in nature), yet we end up unable to provide proper interpretations for their decisions.

To address these gaps, I will take a conceptual yet hands-on approach, exploring the challenges of explainable artificial intelligence (XAI) and human-interpretable machine learning in depth, and showcasing examples using state-of-the-art model interpretation frameworks in Python!

 
 

Outline/Structure of the Tutorial

The focus of this session is to demystify the hype behind the term 'Explainable AI' and talk about tangible concepts which can be leveraged, using state-of-the-art tools and techniques, to build human-interpretable models. We will give a conceptual overview of what Explainable AI (XAI) entails, followed by the major strategies around XAI techniques. Once the audience has some foundational knowledge of XAI, we will walk through case studies with hands-on examples in Python, building machine learning and deep learning models and applying model interpretation and explanation strategies. Overall, the talk will be structured as follows.

Part 1: The Importance of Human Interpretable Machine Learning

  • Understanding Machine Learning Model Interpretation
  • Importance of Machine Learning Model Interpretation
  • Criteria for Model Interpretation Methods
  • Scope of Model Interpretation

Part 2: Model Interpretation Strategies

  • Traditional Techniques for Model Interpretation
  • Challenges and Limitations of Traditional Techniques
  • The Accuracy vs. Interpretability trade-off
  • Model Interpretation Techniques
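
As a small taste of the model-agnostic techniques covered in this part, permutation feature importance can be sketched in a few lines of plain Python. The toy "model" and data below are purely illustrative stand-ins for any trained black-box model:

```python
# Illustrative sketch: permutation feature importance.
# Shuffle one feature column and measure how much the model's error grows;
# features the model relies on produce a large increase.
import random

random.seed(0)

# Toy data: y depends strongly on x0, weakly on x1, not at all on x2.
X = [[random.random(), random.random(), random.random()] for _ in range(200)]
y = [3.0 * x0 + 0.5 * x1 for x0, x1, _ in X]

def model(row):
    # Stand-in for any trained black-box model.
    return 3.0 * row[0] + 0.5 * row[1]

def mse(rows, targets):
    return sum((model(r) - t) ** 2 for r, t in zip(rows, targets)) / len(targets)

baseline = mse(X, y)

def permutation_importance(feature):
    # Shuffle a single column and report the increase in error.
    col = [row[feature] for row in X]
    random.shuffle(col)
    X_perm = [row[:feature] + [v] + row[feature + 1:] for row, v in zip(X, col)]
    return mse(X_perm, y) - baseline

scores = [permutation_importance(i) for i in range(3)]
```

Because the toy model ignores the third feature entirely, its importance comes out as (near) zero, while the dominant first feature scores highest.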

Part 3: Hands-on Model Interpretation — A Comprehensive Guide

  • Hands-on guides on using the latest state-of-the-art model interpretation frameworks
  • Features, concepts and examples of using frameworks like ELI5, Skater and SHAP
  • Explore concepts and see them in action — Feature importances, partial dependence plots, surrogate models, interpretation and explanations with LIME, SHAP values
  • Hands-on Machine Learning Model Interpretation on a supervised learning example
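
To give a flavour of the SHAP-style explanations covered here, the sketch below computes exact Shapley values for a tiny three-feature model — the quantity that SHAP frameworks approximate efficiently for real models. The toy model, baseline and instance are all hypothetical:

```python
# Illustrative sketch: exact Shapley values by brute force.
# Each feature's attribution is its average marginal contribution to the
# prediction, taken over all orderings in which features can be "revealed".
from itertools import permutations

def model(x):
    # Toy black-box prediction with an interaction term.
    return 2.0 * x[0] + 1.0 * x[1] + x[0] * x[2]

baseline = [0.0, 0.0, 0.0]   # reference values for "missing" features
instance = [1.0, 1.0, 1.0]   # the prediction we want to explain

def value(present):
    # Evaluate the model with absent features set to the baseline.
    x = [instance[i] if i in present else baseline[i] for i in range(3)]
    return model(x)

def shapley(i):
    # Average feature i's marginal contribution over all orderings.
    total = 0.0
    perms = list(permutations(range(3)))
    for order in perms:
        before = set(order[:order.index(i)])
        total += value(before | {i}) - value(before)
    return total / len(perms)

phis = [shapley(i) for i in range(3)]
```

A useful sanity check is the efficiency property: the attributions sum exactly to the difference between the model's prediction for the instance and for the baseline.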

Part 4: Hands-on Advanced Model Interpretation

  • Hands-on Model Interpretation on Unstructured Datasets
  • Advanced Model Interpretation on Deep Learning Models
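
As one example of the deep learning interpretation ideas covered in this part, gradient-based saliency asks which input dimensions most affect a model's output. Below is a minimal, purely illustrative sketch using a hand-wired toy "network" and finite differences in place of real backpropagated gradients:

```python
# Illustrative sketch: gradient-based saliency.
# Inputs whose small perturbations change the output the most are the
# ones the model is most sensitive to.
import math

def network(x):
    # Toy one-hidden-unit "network": sensitive to x[0], barely to x[1].
    h = math.tanh(2.0 * x[0] + 0.01 * x[1])
    return 3.0 * h

def saliency(x, eps=1e-6):
    # Finite-difference approximation of |d output / d x_i|.
    out = []
    for i in range(len(x)):
        bumped = list(x)
        bumped[i] += eps
        out.append(abs(network(bumped) - network(x)) / eps)
    return out

s = saliency([0.1, 0.1])
```

In real deep learning workflows the gradients come from the framework's autodiff rather than finite differences, but the interpretation — a per-input sensitivity score — is the same.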

Learning Outcome

Key takeaways from this talk/tutorial:

- Understand what Explainable Artificial Intelligence is

- Learn the latest and best techniques for building interpretable models and for opening up opaque, complex black-box models

- Learn how to leverage state-of-the-art model interpretation frameworks in Python

- Understand how to interpret models on both structured and unstructured data

Target Audience

Data Scientists, Engineers, Managers, AI Enthusiasts

Prerequisite

Participants are expected to know what AI, Machine Learning and Deep Learning are, along with some basics of the Data Science lifecycle, including data, features, modeling and evaluation.

Examples will be shown in Python, so basic knowledge of Python helps.

