Interpretable Machine Learning - Fairness, Accountability and Transparency in ML systems

The good news is building fair, accountable, and transparent machine learning systems is possible. The bad news is it’s harder than many blogs and software package docs would have you believe. The truth is that nearly all interpretable machine learning techniques generate approximate explanations, that the fields of eXplainable AI (XAI) and Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) are very new, and that few best practices have been widely agreed upon. This combination can lead to some ugly outcomes!

This talk aims to make your interpretable machine learning project a success by describing fundamental technical challenges you will face in building an interpretable machine learning system, defining the real-world value proposition of approximate explanations for exact models, and then outlining the following viable techniques for debugging, explaining, and testing machine learning models:

  • Model visualizations including decision tree surrogate models, individual conditional expectation (ICE) plots, partial dependence plots, and residual analysis.
  • Reason code generation techniques like LIME, Shapley explanations, and Tree-interpreter.
  • Sensitivity analysis.

Plenty of guidance on when, and when not, to use these techniques will also be shared (a brief illustrative sketch of two of them follows below), and the talk will conclude by providing guidelines for testing the generated explanations themselves for accuracy and stability.
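
For instance, a minimal sketch of two of the listed techniques might look like the following. This is illustrative only; the toy dataset, model, and feature choices are assumptions for the sketch, not examples from the talk. It uses scikit-learn for partial dependence / ICE plots and the shap package for Shapley explanations.

    import matplotlib.pyplot as plt
    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.inspection import PartialDependenceDisplay

    # Toy data and model standing in for whatever model needs explaining.
    X, y = load_diabetes(return_X_y=True, as_frame=True)
    model = GradientBoostingRegressor(random_state=42).fit(X, y)

    # Partial dependence plus per-row ICE curves for two features.
    PartialDependenceDisplay.from_estimator(model, X, features=["bmi", "bp"], kind="both")
    plt.show()

    # Shapley explanations: per-feature contributions ("reason codes") for one row.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X.iloc[:1])
    print(dict(zip(X.columns, shap_values[0])))

The same model can be probed with both families of techniques: the plots describe global and per-observation behaviour, while the Shapley values attribute a single prediction to individual features.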
 
 

Outline/Structure of the Talk

  • What is Machine Learning Interpretability?
  • Why Should You Care About Machine Learning Interpretability?
  • Why is Machine Learning Interpretability Difficult?
  • What is the Value Proposition of Machine Learning Interpretability?
  • How Can Machine Learning Interpretability Be Practiced? (several examples)
  • Can Machine Learning Interpretability Be Tested?
  • General recommendations
  • Tool-based observations

Learning Outcome

By the end of the session, attendees will have a clear idea of the importance of fairness, accountability and transparency in ML and how it stands up in real-world scenarios. They will also get to see some real examples justifying the importance of interpretability in ML systems, and they will get to know some of the tools used in this regard (such as LIME and Shapley values).

Target Audience

Machine Learning enthusiasts/practitioners who are trying to explain their models.

Prerequisites for Attendees

Basic familiarity with machine learning concepts.

Submitted 6 months ago

Public Feedback

  • Kuldeep Jiwani  ~  4 months ago

    Hi Sayak,

    Your proposal seems to be well thought out from multiple angles and seems to provide good practical value.

    Just wanted to know some more details about your presentation on interpretability. It seems you would be covering various existing techniques like surrogate models, LIME, SHAP, etc. Do you also plan to share with the audience which technique works better than the others in certain cases, and how to choose amongst these techniques?

    • Sayak Paul  ~  4 months ago

      Hi Kuldeep,

      Yes, I have thought of showing examples where going with LIME might be better than going with SHAP values. And thank you for your kind words. :)

      • Kuldeep Jiwani  ~  4 months ago

        Thanks for the clarification


  • Liked Sayak Paul

    Sayak Paul - Introduction to Anomaly Detection in Data

    Sayak Paul
    Data Science Instructor, DataCamp
    4 months ago
    Sold Out!
    45 Mins
    Talk
    Intermediate
    There are always some students in a classroom who either outperform everyone else or fail to secure even the bare minimum marks in a subject. Most of the time, students' marks are roughly normally distributed, apart from the ones just mentioned. Such marks can be termed extreme highs and extreme lows respectively, and in Statistics and related areas like Machine Learning these values are referred to as anomalies or outliers.
    The very basic idea of anomalies really centers on two kinds of values - extremely high values and extremely low values. Why, then, are they given so much importance? In this session, we will investigate questions like this. We will see how anomalies are created/generated, why they are important to consider while developing machine learning models, and how they can be detected. We will also work through a small case study in Python to solidify our understanding of anomalies (a tiny illustrative sketch follows below).
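
    As a tiny, self-contained illustration of the basic idea only (the synthetic "marks" data and the simple z-score rule are assumptions for this sketch, not the session's case study):

      import numpy as np

      # Synthetic "marks": mostly normally distributed, plus a few extreme lows/highs.
      rng = np.random.default_rng(0)
      marks = np.concatenate([rng.normal(65, 10, 500), [2, 5, 99, 100]])

      # A simple rule: anything more than 3 standard deviations from the mean is flagged.
      z_scores = (marks - marks.mean()) / marks.std()
      print(marks[np.abs(z_scores) > 3])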
  • Liked Sayak Paul

    Sayak Paul / Anubhav Singh - End-to-end project on predicting collective sentiment for programming language using StackOverflow answers

    90 Mins
    Tutorial
    Intermediate

    In a world with a plethora of programming languages and a diverse population of developers working with them, an interesting question poses itself: “How happy are the developers of any given language?”

    Sentiment towards a language often creeps into the StackOverflow answers provided by users. With the ability to perform sentiment analysis on users' answers, we can go a step further and aggregate the average sentiment per language - which conveniently answers our question of interest.

    The presenters build an end-to-end project that begins with pulling data from the StackOverflow API, continues with building the collective sentiment prediction model, and ends with deploying it as an API on GCP Compute Engine (a rough illustrative sketch of the pipeline shape follows below).
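
    As a rough sketch of the pipeline shape only (not the presenters' code): the endpoints below belong to the public Stack Exchange API, and VADER is used purely as a stand-in for the sentiment model built in the tutorial.

      import requests
      from nltk.sentiment import SentimentIntensityAnalyzer  # needs nltk.download("vader_lexicon")

      API = "https://api.stackexchange.com/2.3"

      def average_answer_sentiment(tag):
          # 1. Fetch a page of recent questions carrying the tag.
          questions = requests.get(
              f"{API}/questions",
              params={"site": "stackoverflow", "tagged": tag, "pagesize": 30},
              timeout=30,
          ).json()["items"]
          if not questions:
              return 0.0
          ids = ";".join(str(q["question_id"]) for q in questions)

          # 2. Fetch the answers to those questions, including their (HTML) bodies.
          answers = requests.get(
              f"{API}/questions/{ids}/answers",
              params={"site": "stackoverflow", "filter": "withbody", "pagesize": 100},
              timeout=30,
          ).json()["items"]

          # 3. Score each answer body and average the compound sentiment for this tag.
          sia = SentimentIntensityAnalyzer()
          scores = [sia.polarity_scores(a["body"])["compound"] for a in answers]
          return sum(scores) / len(scores) if scores else 0.0

      print(average_answer_sentiment("python"))

    A real pipeline would, of course, strip the HTML from the answer bodies and swap VADER for the trained model before aggregating the scores per language.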