Introduction to Anomaly Detection in Data

In almost every classroom there are a few students who either outperform the rest or fail to secure even the bare minimum passing marks. Apart from these few, students' marks are usually roughly normally distributed. The marks of these exceptional students can be termed extreme highs and extreme lows respectively. In Statistics and related areas like Machine Learning, such values are referred to as Anomalies or Outliers.
The very basic idea of anomalies is centered around two kinds of values - extremely high values and extremely low values. So why are they given so much importance? In this session, we will investigate questions like this. We will see how anomalies are created/generated, why they are important to consider while developing machine learning models, and how they can be detected. We will also work through a small case study in Python to solidify our understanding of anomalies.
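
To make the "extreme highs and extreme lows" idea concrete, here is a minimal sketch (the marks, threshold, and variable names are illustrative and not taken from the session material) that flags such values using z-scores:

```python
import numpy as np

# Illustrative marks: most of the class sits around the low 60s,
# with one extreme low (15) and one extreme high (98).
marks = np.array([62, 60, 65, 63, 61, 64, 66, 62, 59, 67,
                  63, 65, 61, 64, 60, 66, 63, 62, 15, 98])

# Z-score: how many standard deviations a mark lies from the class mean.
z_scores = (marks - marks.mean()) / marks.std()

# A common (adjustable) rule of thumb: treat |z| > 2 as anomalous.
outliers = marks[np.abs(z_scores) > 2]
print(outliers)  # flags the extreme low and the extreme high
```

With this toy data, only the two extreme marks lie more than two standard deviations from the mean; all the other marks stay well within that band.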
 
 

Outline/Structure of the Talk

  • A dive into the wild: Anomalies in the real world
  • Find the odd ones out: Anomalies in data
  • Generation of anomalies in data
  • Different types of anomalies
  • How anomalies affect the performance of an ML model
  • Utilizing anomalies in ML models
  • A case study of anomaly detection in Python

Learning Outcome

By the end of this session, attendees will have a good background on anomalies and an idea of the basic techniques for tackling them (along with an introduction to PyOD).
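
As a taste of what such a case study might look like, here is a minimal, hedged sketch of PyOD usage; the choice of detector (KNN), the synthetic data, and the contamination value are illustrative assumptions, not the session's actual material:

```python
import numpy as np
from pyod.models.knn import KNN  # k-nearest-neighbours based detector

rng = np.random.RandomState(42)

# Mostly "normal" 2-D points plus a few points placed far away.
X_inliers = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
X_outliers = rng.uniform(low=6.0, high=8.0, size=(10, 2))
X = np.vstack([X_inliers, X_outliers])

# contamination = expected fraction of outliers in the data.
detector = KNN(contamination=0.05)
detector.fit(X)

labels = detector.labels_            # 0 = inlier, 1 = outlier
scores = detector.decision_scores_   # higher score = more anomalous
print("Flagged as outliers:", np.sum(labels == 1))
```

Other PyOD detectors expose the same fit / labels_ / decision_scores_ interface, so swapping in a different technique is straightforward.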

Target Audience

Data Science Enthusiasts, Data Science Practitioners, Machine Learning Beginners

Prerequisites for Attendees

Basic familiarity with Machine Learning would be ideal

Submitted 11 months ago

Public Feedback


    • Sayak Paul - Interpretable Machine Learning - Fairness, Accountability and Transparency in ML systems

      45 Mins
      Talk
      Beginner

      The good news is building fair, accountable, and transparent machine learning systems is possible. The bad news is it’s harder than many blogs and software package docs would have you believe. The truth is nearly all interpretable machine learning techniques generate approximate explanations, that the fields of eXplainable AI (XAI) and Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) are very new, and that few best practices have been widely agreed upon. This combination can lead to some ugly outcomes!

      This talk aims to make your interpretable machine learning project a success by describing fundamental technical challenges you will face in building an interpretable machine learning system, defining the real-world value proposition of approximate explanations for exact models, and then outlining the following viable techniques for debugging, explaining, and testing machine learning models:

      • Model visualizations including decision tree surrogate models, individual conditional expectation (ICE) plots, partial dependence plots, and residual analysis.
      • Reason code generation techniques like LIME, Shapley explanations, and Tree-interpreter.
      • Sensitivity analysis.

      Plenty of guidance on when, and when not, to use these techniques will also be shared, and the talk will conclude by providing guidelines for testing generated explanations themselves for accuracy and stability.
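
      As a point of reference, here is a minimal sketch of one of the listed techniques, a decision tree surrogate; the dataset, models, and tree depth are illustrative assumptions and not the talk's actual code:

      ```python
      # Minimal decision-tree-surrogate sketch (illustrative only).
      from sklearn.datasets import load_breast_cancer
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.tree import DecisionTreeClassifier, export_text

      X, y = load_breast_cancer(return_X_y=True)

      # "Complex" model whose behaviour we want to approximate.
      complex_model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

      # Surrogate: a shallow tree trained on the complex model's *predictions*,
      # not the original labels, so its splits approximate the model's logic.
      surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
      surrogate.fit(X, complex_model.predict(X))

      print(export_text(surrogate))  # human-readable, approximate explanation
      ```

      Because the surrogate is fit to the complex model's predictions rather than the true labels, its splits give an approximate, human-readable view of how that model behaves.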
    • Sayak Paul / Anubhav Singh - End-to-end project on predicting collective sentiment for programming language using StackOverflow answers

      90 Mins
      Tutorial
      Intermediate

      With a plethora of programming languages in use and a diverse population of developers working with them, an interesting question arises - “How happy are the developers of any given language?”

      Sentiment towards a language often creeps into the StackOverflow answers its users write. By performing sentiment analysis on these answers, we can aggregate the average sentiment per language, which conveniently answers our question of interest.

      The presenters build an end-to-end project that begins with pulling data from the StackOverflow API, continues with building the collective sentiment prediction model, and ends with deploying it as an API on GCP Compute Engine.
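
      For orientation, a minimal sketch of the first two steps of such a pipeline is shown below; the tag, the query parameters, and the use of TextBlob for sentiment scoring are assumptions made for illustration, not the presenters' actual implementation:

      ```python
      # Illustrative sketch only: pull answers for one tag and average their sentiment.
      import re
      import requests
      from textblob import TextBlob  # simple off-the-shelf polarity scorer

      API = "https://api.stackexchange.com/2.3"
      TAG = "python"  # the language whose "collective sentiment" we sample

      # 1. Pull a handful of recent questions for the tag.
      questions = requests.get(
          f"{API}/questions",
          params={"site": "stackoverflow", "tagged": TAG, "pagesize": 10,
                  "order": "desc", "sort": "activity"},
      ).json().get("items", [])
      ids = ";".join(str(q["question_id"]) for q in questions)

      # 2. Fetch the answers to those questions, including their bodies.
      answers = requests.get(
          f"{API}/questions/{ids}/answers",
          params={"site": "stackoverflow", "filter": "withbody"},
      ).json().get("items", [])

      # 3. Score each answer's sentiment and average it for the tag.
      def polarity(html_body):
          text = re.sub(r"<[^>]+>", " ", html_body)  # crude HTML stripping
          return TextBlob(text).sentiment.polarity   # in [-1, 1]

      scores = [polarity(a["body"]) for a in answers]
      if scores:
          print(f"Average sentiment for '{TAG}': {sum(scores) / len(scores):.3f}")
      ```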