Causal questions are ubiquitous in data science. Questions such as whether changing a feature on a website led to more traffic, or whether digital ad exposure led to incremental purchases, are deeply rooted in causality.

Randomized experiments are considered the gold standard for getting at causal effects. In many cases, however, experiments are infeasible or unethical, and one has to rely on observational (non-experimental) data to derive causal insights. The crucial difference between randomized experiments and observational data is that in the former, test subjects (e.g. customers) are randomly assigned a treatment (e.g. digital advertisement exposure). This curbs the possibility that user response (e.g. clicking on a link in the ad and purchasing the product) differs between the treated and non-treated groups owing to pre-existing differences in user characteristics (e.g. demographics, geo-location, etc.). In essence, we can then attribute divergences observed post-treatment in key outcomes (e.g. purchase rate) to the causal impact of the treatment.
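
In the standard potential-outcomes notation (added here for concreteness; the proposal itself does not spell this out), the target quantity and the role of randomization can be written as

\[
\text{ATE} = \mathbb{E}[\,Y(1) - Y(0)\,], \qquad T \perp (Y(0), Y(1)) \;\Rightarrow\; \text{ATE} = \mathbb{E}[\,Y \mid T = 1\,] - \mathbb{E}[\,Y \mid T = 0\,],
\]

where Y(1) and Y(0) are the outcomes a subject would have with and without the treatment T, and the independence holds by design under random assignment.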

This treatment assignment mechanism, which makes causal attribution possible via randomization, is absent when using observational data. Thankfully, there are scientific (statistical and beyond) techniques available that help us circumvent this shortcoming and get to causal reads.

The aim of this talk is to offer a practical overview of the above aspects of causal inference, a discipline that lies at the fascinating confluence of statistics, philosophy, computer science, psychology, economics, and medicine, among others. Topics include:

  • The fundamental tenets of causality and measuring causal effects.
  • Challenges involved in measuring causal effects in real world situations.
  • Distinguishing between randomized and observational approaches to measuring causal effects.
  • An introduction to measuring causal effects from observational data using matching and its extension, propensity score based matching, with a focus on a) the intuition and statistics behind these techniques, b) tips from the trenches, based on the speaker's experience with them, and c) their practical limitations.
  • A walkthrough of how matching was applied to derive causal insights about the effectiveness of a digital product for a major retailer.
  • Finally, a conclusion on why a nuanced understanding of causality is all the more important in the big data era we are in.
 
 

Outline/Structure of the Talk

The broad structure is as below:

  • The fundamental tenets of causality and measuring causal effects.
  • Challenges involved in measuring causal effects in real world situations.
  • Distinguishing between randomized and observational approaches to measuring causal effects.
  • An introduction to measuring causal effects from observational data using matching and its extension, propensity score based matching, with a focus on a) the intuition and statistics behind these techniques, b) tips from the trenches, based on the speaker's experience with them, and c) their practical limitations.
  • A walkthrough of how matching was applied to derive causal insights about the effectiveness of a digital product at Walmart.
  • Finally, a conclusion on why having a nuanced understanding of causality is all the more important in the big data era we are in.

Learning Outcome

The learning outcomes are outlined below:

  • The fundamental nuances of causal inference.
  • Understand the differences between randomized and observational studies & the challenges in getting to causal conclusions for each.
  • Analytical frameworks (and implementation tools) to tease out causal effects in the wild, when randomization isn’t an option. I will focus on matching and its extensions as the analytic framework for teasing out causal effects from observational data. There is a variety of methods; I will keep to this one in the interest of time and relevance. Matching also involves angles that should be of interest to ML enthusiasts (e.g. the choice of distance measures, finding k nearest neighbors efficiently, classification models, and a careful grasp of finer statistical nuances). Broadly, I intend to give the audience a flavor of the following analysis framework:
    • An overview of matching.
    • Rules for implementing matching directly on covariates/confounders.
    • Segue into greedy (or nearest neighbor based) matching.
    • Briefly touch upon the concept of optimal matching.
    • Analysis to test whether matching has created a scenario conducive to drawing causal insights.
    • Branch off and show how we can extend this to create propensity scores and propensity score based matching.
    • On the implementation tooling, I will provide details of the available packages (and the better options among them) in the open source software R that can help us conduct such a matching based analysis; a minimal, language-agnostic sketch of the mechanics follows below.
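
As an illustration only (the talk's implementation will be in R, and the data, variable names and modelling choices below are made up), a minimal propensity score matching sketch in Python:

# Illustrative sketch of propensity score matching on synthetic data.
# The talk itself will cover R packages; this only conveys the mechanics.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "age": rng.normal(35, 10, n),
    "past_spend": rng.gamma(2.0, 50.0, n),
})
# Treatment (e.g. ad exposure) depends on the covariates, i.e. confounding.
p_treat = 1 / (1 + np.exp(-(0.03 * (df["age"] - 35) + 0.004 * (df["past_spend"] - 100))))
df["treated"] = rng.binomial(1, p_treat)
# Outcome with a true treatment effect of 0.5.
df["outcome"] = (0.5 * df["treated"] + 0.02 * df["age"]
                 + 0.01 * df["past_spend"] + rng.normal(0, 1, n))

# 1) Estimate propensity scores P(treated | covariates).
X = df[["age", "past_spend"]]
df["pscore"] = LogisticRegression(max_iter=1000).fit(X, df["treated"]).predict_proba(X)[:, 1]

treated = df[df["treated"] == 1]
control = df[df["treated"] == 0]

# 2) Greedy 1:1 nearest-neighbour matching (with replacement) on the propensity score.
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_control = control.iloc[idx.ravel()]

# 3) Balance check: standardized mean differences should shrink after matching.
for col in ["age", "past_spend"]:
    smd = (treated[col].mean() - matched_control[col].mean()) / df[col].std()
    print(f"post-match standardized mean difference for {col}: {smd:.3f}")

# 4) Effect estimate on the matched sample (effect of treatment on the treated).
print("estimated effect:", treated["outcome"].mean() - matched_control["outcome"].mean())

The same four steps - fit a propensity model, match nearest neighbours, check balance, compare outcomes - are what the R packages wrap up, along with refinements such as calipers, matching without replacement and optimal matching.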

Target Audience

Any practitioner of data science - Data Scientists, Decision Scientists, Data Analysts and Data Science Managers.

Prerequisites for Attendees

A basic understanding of statistics and machine learning.

Submitted 4 weeks ago

Public Feedback

  • Naresh Jain  ~  2 weeks ago

    Subhasish, thank you for adding a proposal on a very important topic. This is a very commonly asked question.

    IMHO this topic is generic and does not specifically fit under ML/DL. I feel "Data Management", which also focuses on how to run experimentation at scale, might be a better category to put this proposal under. If you agree, you can update your proposal and change the theme to Data Management.

    In the learning outcome, you've highlighted that the participant will learn about analytical frameworks (and implementation tools.) Can you elaborate on this? I see that you've mentioned propensity score-based matching. More details would help the program committee make an informed decision.

    • Subhasish Misra  ~  2 weeks ago

      Hi Naresh,

      Thank you for your comments.

      • On the first point – I think the theme of learning best practices for effective data science management (covered under 'Data management') suits this talk. I have updated the theme accordingly.
      • On the second point – yes, I will focus on matching (there’s a variety of methods; I will keep to this one in the interest of time) and its extensions as the analytic framework to tease out causal effects from observational data. Broadly, I intend to give the audience a flavor of the following analysis framework:
        • An overview of matching.
        • Rules for implementing matching directly on covariates/confounders.
        • Segue into greedy (or nearest neighbor based) matching.
        • Briefly touch upon the concept of optimal matching.
        • Analysis to test whether matching has created a scenario conducive to drawing causal insights.
        • Branch off and show how we can extend this to create propensity scores and propensity score based matching.

      On the implementation tooling, I will provide details of the available packages (and the better options among them) in the open source software R that can help us conduct such a matching based analysis.

      Btw, more details on propensity score matching can be found in the 3rd link I have put in the proposal: https://www.quirks.com/articles/propensity-score-analysis-a-tool-for-mr

      Let me know if you have any further questions.

      Warm Regards,

      Subhasish

      • Naresh Jain  ~  1 week ago

        Thank you. May I request you to please update the proposal with the analysis framework details you highlighted in the comment?

        • Subhasish Misra  ~  6 days ago

          You are welcome, Naresh. I have updated the analysis framework (have added more details too) in the section on 'learning outcomes'. Hope that works. Happy to take any further questions - let me know :)


  • Liked Dipanjan Sarkar

    Dipanjan Sarkar - Explainable Artificial Intelligence - Demystifying the Hype

    Dipanjan Sarkar
    Data Scientist
    Red Hat
    3 months ago
    Sold Out!
    45 Mins
    Tutorial
    Intermediate

    The field of Artificial Intelligence powered by Machine Learning and Deep Learning has gone through some phenomenal changes over the last decade. Starting off as a purely academic and research-oriented domain, we have seen widespread industry adoption across diverse domains including retail, technology, healthcare, science and many more. More often than not, the standard toolbox of machine learning, statistical or deep learning models remains the same. New models do come into existence, like Capsule Networks, but industry adoption of the same usually takes several years. Hence, in the industry, the main focus of data science or machine learning is more ‘applied’ rather than theoretical, and effective application of these models on the right data to solve complex real-world problems is of paramount importance.

    A machine learning or deep learning model by itself consists of an algorithm which tries to learn latent patterns and relationships from data without hard-coding fixed rules. Hence, explaining how a model works to the business always poses its own set of challenges. There are some domains in the industry especially in the world of finance like insurance or banking where data scientists often end up having to use more traditional machine learning models (linear or tree-based). The reason being that model interpretability is very important for the business to explain each and every decision being taken by the model. However, this often leads to a sacrifice in performance. This is where complex models like ensembles and neural networks typically give us better and more accurate performance (since true relationships are rarely linear in nature). We, however, end up being unable to have proper interpretations for model decisions.

    To address and talk about these gaps, I will take a conceptual yet hands-on approach where we will explore some of these challenges of explainable artificial intelligence (XAI) and human interpretable machine learning in depth, and even showcase some examples using state-of-the-art model interpretation frameworks in Python!
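
    As one concrete possibility (shap is used here purely as an example; the session's exact frameworks are not listed in the abstract), a tree-ensemble model can be explained along these lines:

    # Hedged illustration: shap is one possible Python interpretation framework,
    # not necessarily the one used in the session.
    import shap
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    explainer = shap.TreeExplainer(model)   # model-specific explainer for tree ensembles
    shap_values = explainer.shap_values(X)  # per-feature contribution to each prediction
    shap.summary_plot(shap_values, X)       # global view of which features drive the model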

  • Liked Shrutika Poyrekar

    Shrutika Poyrekar / kiran karkera / Usha Rengaraju - Introduction to Bayesian Networks

    90 Mins
    Workshop
    Beginner

    Most machine learning models assume independent and identically distributed (i.i.d.) data. Graphical models can capture almost arbitrarily rich dependency structures between variables. They encode conditional independence structure with graphs. A Bayesian network, a type of graphical model, describes a probability distribution among all variables by putting edges between the variable nodes, wherein edges represent the conditional probability factor in the factorized probability distribution. Thus Bayesian networks provide a compact representation for dealing with uncertainty using an underlying graphical structure and probability theory. These models have a variety of applications such as medical diagnosis, biomonitoring, image processing, turbo codes, information retrieval, document classification, gene regulatory networks, etc., amongst many others. These models are interpretable as they are able to capture the causal relationships between different features. They can work efficiently with small data and can also deal with missing data, which gives them more power than conventional machine learning and deep learning models.

    In this session, we will discuss the concepts of conditional independence, d-separation, the Hammersley-Clifford theorem, Bayes' theorem, Expectation Maximization and Variable Elimination. There will be a code walkthrough of a simple case study.
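
    As a flavour of what such a session might cover (pgmpy is an assumption for illustration; the workshop's actual tooling is not stated), a toy Bayesian network queried with variable elimination:

    # Toy Bayesian network with exact inference via variable elimination.
    from pgmpy.models import BayesianNetwork
    from pgmpy.factors.discrete import TabularCPD
    from pgmpy.inference import VariableElimination

    # Rain and Sprinkler both influence whether the grass is wet.
    model = BayesianNetwork([("Rain", "WetGrass"), ("Sprinkler", "WetGrass")])
    cpd_rain = TabularCPD("Rain", 2, [[0.8], [0.2]])
    cpd_sprinkler = TabularCPD("Sprinkler", 2, [[0.6], [0.4]])
    cpd_wet = TabularCPD(
        "WetGrass", 2,
        [[0.99, 0.1, 0.1, 0.01],   # P(WetGrass = no | Rain, Sprinkler)
         [0.01, 0.9, 0.9, 0.99]],  # P(WetGrass = yes | Rain, Sprinkler)
        evidence=["Rain", "Sprinkler"], evidence_card=[2, 2],
    )
    model.add_cpds(cpd_rain, cpd_sprinkler, cpd_wet)
    assert model.check_model()

    # What does observing wet grass tell us about rain?
    inference = VariableElimination(model)
    print(inference.query(["Rain"], evidence={"WetGrass": 1}))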

  • Liked Johnu George

    Johnu George / Ramdoot Kumar P / Yuji Oshima - A Scalable Hyperparameter Optimization framework for ML workloads

    45 Mins
    Talk
    Intermediate

    In machine learning, hyperparameters are parameters that govern the training process itself. For example, the learning rate, the number of hidden layers and the number of nodes per layer are typical hyperparameters for neural networks. Hyperparameter tuning is the process of searching for the best hyperparameters to initialize the learning algorithm, thus improving training performance.

    We present Katib, a scalable and general hyperparameter tuning framework based on Kubernetes which is ML framework agnostic (TensorFlow, PyTorch, MXNet, XGBoost, etc.). You will learn about Katib in Kubeflow, an open source ML toolkit for Kubernetes, as we demonstrate the advantages of hyperparameter optimization by running a sample classification problem. In addition, as we dive into the implementation details, you will learn how to contribute as we expand this platform to include AutoML tools.
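
    For readers new to the idea, hyperparameter tuning in miniature looks like the generic scikit-learn snippet below; Katib automates the same search-and-evaluate loop at cluster scale across frameworks (the snippet is an illustration, not Katib itself).

    # Generic hyperparameter search in miniature (scikit-learn, not Katib):
    # try several settings, evaluate each by cross-validation, keep the best.
    from sklearn.datasets import load_digits
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import GridSearchCV

    X, y = load_digits(return_X_y=True)
    param_grid = {"n_estimators": [50, 100], "max_depth": [5, 10, None]}
    search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=3)
    search.fit(X, y)
    print(search.best_params_, search.best_score_)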

  • Liked Juan Manuel Contreras

    Juan Manuel Contreras - Beyond Individual Contribution: How to Lead Data Science Teams

    45 Mins
    Talk
    Advanced

    Despite the increasing number of data scientists who are being asked to take on managerial and leadership roles as they grow in their careers, there are still few resources on how to manage data scientists and lead data science teams. There is also scant practical advice on how to serve as head of a data science practice: how to set a vision and craft a strategy for an organization to use data science.

    In this talk, I will describe my experience as a data science leader both at a political party (the Democratic Party of the United States of America) and at a fintech startup (Even.com), share lessons learned from these experiences and conversations with other data science leaders, and offer a framework for how new data science leaders can better transition to both managing data scientists and heading a data science practice.

  • Liked Shankar Somayajula

    Shankar Somayajula - Revisiting Market Basket Analysis (MBA) with the help of SQL Pattern Matching

    45 Mins
    Case Study
    Intermediate

    Market Basket Analysis or Affinity Analysis, using an Association Rules based model, is a cross domain solution framework used in Retail Analytics (shopping baskets), Clickstream/Web Traffic Analytics, Customer Behaviour Analytics, Fraud Analytics, etc.

    Market Basket Analysis (MBA) is used to discover/identify patterns from transactional data (a master-detail transactional set of line items) and serves many down-stream Business processes like Recommendations, Merchandising/Inventory Planning, Product Assortments etc.

    MBA is extensively used in the industry. There are quite a few extensions possible to MBA like (a) Multi-Level Association Rules by allowing the core item/product hierarchy level to be flexible, (b) Multi-Dimensional Association Rules by including additional nuggets of information 'tags' along additional dimensions of interest, (c) Sequential Association Rules by considering the order of events within the transaction and eliciting signals relating to directionality of the Rule including possible causal indicators.

    MBA is typically performed as an offline batch/etl/analytic process with the results of the modeling extracted and saved for subsequent perusal by the Domain/Business Analyst.

    In this solution/revisiting of the MBA process, we decouple the Rule/Pattern identification/discovery phase (finding patterns/rules via Association Rules model build) from the Rule/Pattern KPI calculation phase related to the usefulness evaluation of the patterns (scoring patterns/rules via KPIs).

    MBA Rules/Patterns are typically evaluated via the Support, Confidence and Lift KPIs. Some experts have advocated for the definition of additional KPIs like Conviction, Imbalance Ratio (IR), Kulc factor (Kulczynski) to identify interesting Rule/Patterns. We define these KPIs as well as many custom KPIs which help qualify the Rule/Patterns and aid in Rule/Pattern Discovery/Exploration phase.
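
    As a quick worked example of the basic KPIs on a toy set of baskets (plain Python, illustration only):

    # Worked example of support, confidence and lift for the rule {bread} -> {butter}.
    baskets = [{"bread", "butter"}, {"bread", "milk"}, {"bread", "butter", "milk"}, {"milk"}]
    n = len(baskets)

    def support(itemset):
        return sum(itemset <= b for b in baskets) / n

    sup = support({"bread", "butter"})        # P(bread and butter)     = 0.50
    confidence = sup / support({"bread"})     # P(butter | bread)       = 0.67
    lift = confidence / support({"butter"})   # confidence vs. baseline = 1.33
    print(sup, confidence, lift)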

    The SQL approach to MBA allows us to

    => Include the pattern matching capability within an offline ETL workflow (match and pre-calculate results) or within a view (match on demand, dynamic calculation) or a combination of both (both pre-calculated as well as on-demand) for regular BI Tools to leverage.

    => We can cover special/edge cases of interest in special domains like Fraud Patterns etc with insufficient coverage (very low support) but which need to be identified nevertheless. The pattern space can be very voluminous but in certain cases, we can identify/analyze user defined seeded patterns using SQL w/o having to build the MBA model.

    => We can also address Sequential Rules/Patterns where transaction order of items are considered during the matching process.

    => Another advantage is to allow the Domain Analyst/Business User to perform adhoc reporting via standard BI operations like slice and dice on the dataset and recalculating the Rule/Pattern KPIs.

    => Re-evaluate a Rule/Pattern against a different dataset from that it was identified (say, against a recent/streaming input data stream). See how Patterns discovered during the "Big Sale" period are doing in current Promotion/Campaign.

    => Establish Rule/Pattern Lifecycle beyond that of a MBA 'model' -- Establish a Rules curation process to determine how a discovered Rule/Pattern can be designated as an 'Insight' for further use in related (downstream) systems.

  • 20 Mins
    Demonstration
    Advanced

    In this digital era, when the attention span of customers is reducing drastically, it is imperative for a marketer to understand the following 4 aspects, more popularly known as "The 4R's of Marketing", if they want to increase their ROI:

    - Right Person

    - Right Time

    - Right Content

    - Right Channel

    Only when we design and send our campaigns in such a way that they reach the right customers at the right time through the right channel, telling them about things they like or are interested in, can we expect higher conversions with lower investment. This is a problem that most organizations need to solve to stay relevant in this age of high market competition.

    Among all these, we will put special focus on appropriate content generation for a targeted user base using Markov-based models, and do a quick hack session.
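
    To make the Markov idea concrete, a first-order, word-level chain can already generate candidate copy; the real system described in the talk would of course be far richer.

    # Minimal first-order Markov chain for text generation (illustration only).
    import random
    from collections import defaultdict

    corpus = "buy one get one free offer ends tonight buy now and save more tonight"
    words = corpus.split()

    transitions = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        transitions[current].append(nxt)

    def generate(start, length=8):
        out = [start]
        for _ in range(length - 1):
            choices = transitions.get(out[-1])
            if not choices:
                break
            out.append(random.choice(choices))
        return " ".join(out)

    print(generate("buy"))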

    The time breakup can be:

    5 mins : Difference between Martech and traditional marketing. The 4R's of marketing and why solving for them is crucial

    5 mins : What is Smart Segments and how to solve for it, with a short demo

    5 mins : How marketers use output from Smart Segments to execute targeted campaigns

    5 mins: What is STO, how it can be solved and what is the performance uplift seen by clients when they use it

    5 mins: What is Channel Optimization, how it can be solved and what is the performance uplift seen by clients when they use it

    5 mins: Why sending the right message to customers is crucial, and introduction to appropriate content creation

    15 mins: Covering different Text generation nuances, and a live demo with walk through of a toy code implementation

  • Liked Venkata Pingali

    Venkata Pingali - Accelerating ML using Production Feature Engineering Platform

    Venkata Pingali
    Co-Founder & CEO
    Scribble Data
    1 month ago
    Sold Out!
    45 Mins
    Talk
    Intermediate

    Anecdotally only 2% of the models developed are productionized, i.e., used day to day to improve business outcomes. Part of the reason is the high cost and complexity of productionization of models. It is estimated to be anywhere from 40 to 80% of the overall work.

    In this talk, we will share Scribble Data’s insights into productionization of ML, and how to reduce the cost and complexity in organizations. It is based on the last two years of work at Scribble developing and deploying production ML Feature Engineering Platform, and study of platforms from major organizations such as Uber. This talk expands on a previous talk given in January.

    First, we discuss the complexity of production ML systems, and where time and effort goes. Second, we give an overview of feature engineering, which is an expensive ML task, and the associated challenges. Third, we suggest an architecture for a Production Feature Engineering platform. Last, we discuss how one could go about building one for your organization.

  • Liked Jitendra Rudravaram

    Jitendra Rudravaram / aswin narayanan - Bayesian Modeling with PYMC3

    20 Mins
    Talk
    Beginner

    Bayesian Modeling with PyMC3 to predict dividends; a classic small data problem.

  • Liked Kshitij Srivastava

    Kshitij Srivastava / Manikant Prasad - Data Science in Containers

    45 Mins
    Case Study
    Beginner

    Containers are all the rage in the DevOps arena.

    This session is a live demonstration of how the data team at Milliman uses containers at each step in their data science workflow -

    1) How do containerized environments speed up data scientists at the data exploration stage

    2) How do containers enable rapid prototyping and validation at the modeling stage

    3) How do we put containerized models on production

    4) How do containers make it easy for data scientists to do DevOps

    5) How do containers make it easy for data scientists to host a data science dashboard with continuous integration and continuous delivery

  • Liked Ishita Mathur

    Ishita Mathur - How GO-FOOD built a Query Semantics Engine to help you find the food you want to order

    Ishita Mathur
    Data Scientist
    GO-JEK Tech
    2 weeks ago
    Sold Out!
    45 Mins
    Case Study
    Beginner

    Context: The Search problem

    GOJEK is a SuperApp: 19+ apps within an umbrella app. One of these is GO-FOOD, the first food delivery service in Indonesia and the largest food delivery service in Southeast Asia. There are over 300 thousand restaurants on the platform with a total of over 16 million dishes between them.

    Over two-thirds of those who order food online using GO-FOOD do so by utilising text search. Search engines are so essential to our everyday digital experience that we don’t think twice when using them anymore. Search engines involve two primary tasks: retrieval of documents and ranking them in order of relevance. While improving that ranking is an extremely important part of improving the search experience, actually understanding that query helps give the searcher exactly what they’re looking for. This talk will show you what we are doing to make it easy for users to find what they want.

    GO-FOOD uses the ElasticSearch stack with restaurant and dish indexes to search for what the user types. However, this results in only exact text matches and at most, fuzzy matches. We wanted to create a holistic search experience that not only personalised search results, but also retrieved restaurants and dishes that were more relevant to what the user was looking for. This is being done by not only taking advantage of ElasticSearch features, but also developing a Query semantics engine.

    Query Understanding: What & Why

    This is where Query Understanding comes into the picture: it’s about using NLP to correctly identify the search intent behind the query and return more relevant search results; it’s about the interpretation process before the results are even retrieved and ranked. The semantic neighbours of the query itself become the focus of the search process: after all, if I don’t understand what you’re trying to ask for, how will I give you what you want?

    Over the course of this talk, you will learn how we are taking advantage of word embeddings to build a Query Understanding Engine that is holistically designed to make the customer’s experience as smooth as possible. I will go over the techniques we used to build each component of the engine, the data and algorithmic challenges we faced, and how we solved each problem we came across.
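
    A minimal sketch of the word-embedding idea behind such an engine, using gensim on a toy query corpus (an assumption for illustration, not GO-FOOD's actual stack):

    # Toy sketch: learn embeddings over food-search queries, then look up neighbours.
    from gensim.models import Word2Vec

    queries = [
        ["chicken", "noodle", "soup"],
        ["spicy", "chicken", "wings"],
        ["beef", "noodle"],
        ["iced", "coffee"],
        ["hot", "coffee"],
    ]
    model = Word2Vec(sentences=queries, vector_size=32, window=2, min_count=1, epochs=50)
    # Semantic neighbours of a query term can be used to expand or disambiguate the search.
    print(model.wv.most_similar("noodle", topn=3))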

  • Liked AbdulMajedRaja

    AbdulMajedRaja / Parul pandey - Become Language Agnostic by Combining the Power of R with Python using Reticulate

    45 Mins
    Tutorial
    Intermediate

    Language wars have been around for ages, and with data science booming they have a new contender: R vs Python. While the fans are fighting over R vs Python, the creators (Hadley Wickham (Chief DS @ RStudio) and Wes McKinney (creator of the Pandas project)) are working together as the Ursa Labs team to create open source data science tools. A similar effort by RStudio has given birth to Reticulate (an R interface to Python) that helps programmers combine R and Python in the same code, session and project, and create a new kind of superhero.

  • Liked Ashay Tamhane

    Ashay Tamhane - Modeling Contextual Changes In User Behaviour In Fashion e-commerce

    Ashay Tamhane
    Staff Data Scientist
    Swiggy
    2 weeks ago
    Sold Out!
    20 Mins
    Talk
    Intermediate

    Impulse purchases are quite frequent in fashion e-commerce; browse patterns indicate fluid context changes across diverse product types, probably due to the lack of a well-defined need at the consumer’s end. Data from a fashion e-commerce portal indicate that the final product a person ends up purchasing is often very different from the initial product he/she started the session with. We refer to this characteristic as a ‘context change’. This feature of fashion e-commerce makes understanding and predicting user behaviour quite challenging. Our work attempts to model this characteristic so as to both detect and preempt context changes. Our approach employs a deep Gated Recurrent Unit (GRU) over clickstream data. We show that this model captures context changes better than other non-sequential baseline models.
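
    Purely as an illustration of the modelling idea (layer sizes and inputs below are assumptions, not the authors' actual architecture), a minimal GRU classifier over clickstream sequences in Keras:

    # Minimal GRU classifier over clickstream sequences (illustrative sizes only).
    import tensorflow as tf

    num_products = 5000  # hypothetical catalogue size; a session is a sequence of product ids
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(num_products, 64),
        tf.keras.layers.GRU(64),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # P(context change in this session)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])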

  • Liked Dr. Neha Sehgal

    Dr. Neha Sehgal - Open Data Science for Smart Manufacturing

    45 Mins
    Talk
    Intermediate

    Open Data offers a tremendous opportunity in transformation of today’s manufacturing sector to smarter manufacturing. Smart Manufacturing initiatives include digitalising production processes and integrating IoT technologies for connecting machines to collect data for analysis and visualisation.

    In this talk, the linkage between various industries within the manufacturing sector will be illustrated through the lens of Open Data Science. Data on manufacturing sector companies, company profiles, officers and financials will be scraped from UK Open Data APIs. The work I plan to showcase at ODSC is part of the UK Made Smarter Project, where it has been useful for major aerospace alliances to find the champions and strugglers (SMEs) within the manufacturing sector, based on open data gathered from multiple sources. The talk includes a discussion of data extraction, data cleaning, data transformation - transforming raw financial information about companies into key metrics of interest - and further data analytics to cluster manufacturing companies into "Champions" and "Strugglers". The talk showcases examples of powerful R Shiny based dashboards of interest to suppliers, manufacturers and other key stakeholders in the supply chain network.

    Further analysis includes network analysis for industries, clustering, and deploying the model as an API using Google Cloud Platform. The presenter will discuss the necessity of an 'Analytical Thinking' approach as an aid to handling complex big data projects, and how to overcome challenges while working with real-life data science projects.

  • Liked Krishna Sangeeth

    Krishna Sangeeth - The last mile problem in ML

    Krishna Sangeeth
    Data Scientist
    Ericsson
    1 week ago
    Sold Out!
    45 Mins
    Talk
    Intermediate

    “We have built a machine learning model, What next?”

    There is quite a bit of a journey that one needs to cover from building a model in a Jupyter notebook to taking it to production.
    I would like to call it the “last mile problem in ML”; this last mile can be simple to tread if we embrace some good ideas.

    This talk covers some of these opinionated ideas on how we can get around some of the pitfalls in deployment of ML models in production.

    We will go over the below questions in detail and think about solutions for them.

    • How do we fix the zombie model apocalypse, a state where nobody knows how the model was trained?
    • In science, experiments are considered valid only if they are reproducible. Should this be the case in data science as well?
    • Training the model on your local machine and waiting for an eternity for it to complete is no fun. What are some better ways of doing this?
    • How do you package your machine learning code in a robust manner?
    • Does an ML project have the luxury of not following good Software Engineering principles?
  • Liked Akash Tandon

    Akash Tandon - Traversing the graph computing and database ecosystem

    Akash Tandon
    Data Engineer
    SocialCops
    1 week ago
    Sold Out!
    45 Mins
    Talk
    Intermediate

    Graphs have long held a special place in computer science’s history (and codebases). We're seeing the advent of a new wave of the information age; an age that is characterized by great emphasis on linked data. Hence, graph computing and databases have risen to prominence rapidly over the last few years. Be it enterprise knowledge graphs, fraud detection or graph-based social media analytics, there are a great number of potential applications.

    To reap the benefits of graph databases and computing, one needs to understand the basics as well as current technical landscape and offerings. Equally important is to understand if a graph-based approach suits your problem.
    These realizations are a result of my involvement in an effort to build an enterprise knowledge graph platform. I also believe that graph computing is more than a niche technology and has potential for organizations of varying scale.
    Now, I want to share my learning with you.

    This talk will touch upon the above points with the general premise being that data structured as graph(s) can lead to improved data workflows.
    During our journey, you will learn fundamentals of graph technology and witness a live demo using Neo4j, a popular property graph database. We will walk through a day in the life of data workers (engineers, scientists, analysts), the challenges that they face and how graph-based approaches result in elegant solutions.
    We'll end our journey with a peek into the current graph ecosystem and high-level concepts that need to be kept in mind while adopting an offering.
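
    For a flavour of what querying a property graph looks like from Python (connection details and the data model here are placeholders, not the demo's actual setup):

    # Placeholder sketch: querying a property graph with the official Neo4j Python driver.
    from neo4j import GraphDatabase

    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
    with driver.session() as session:
        result = session.run(
            "MATCH (p:Person)-[:WORKS_AT]->(o:Org {name: $org}) RETURN p.name AS name",
            org="ExampleOrg",
        )
        for record in result:
            print(record["name"])
    driver.close()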

  • Liked Pallavi Mudumby

    Pallavi Mudumby - B2B Recommender System using Semantic knowledge - Ontology

    45 Mins
    Case Study
    Intermediate

    In this era of big data, recommender systems are becoming increasingly important for businesses because they can help companies offer personalized product recommendations to customers. There have been many recognized successes of consumer-oriented recommender systems, particularly in e-commerce. However, when it comes to the Business-to-Business (B2B) market space, there has been less research and real-time application of such systems.

    In our case study, we present a hybrid approach to building a context-sensitive recommender system incorporating semantic knowledge in the form of a domain ontology and a custom user-user collaborative filtering model in a B2B space. Using engineering products transaction data from an instrumentation company, we demonstrate that this recommendation algorithm offers improved personalization, diversity and cold start performance compared to a standard Collaborative Filtering based recommender system.

  • Liked Pushker Ravindra

    Pushker Ravindra - Data Science Best Practices for R and Python

    20 Mins
    Talk
    Intermediate

    How many times did you feel that you were not able to understand someone else’s code or sometimes not even your own? It’s mostly because of bad/no documentation and not following the best practices. Here I will be demonstrating some of the best practices in Data Science, for R and Python, the two most important programming languages in the world for Data Science, which would help in building sustainable data products.

    - Integrated Development Environment (RStudio, PyCharm)

    - Coding best practices (Google’s R Style Guide and Hadley’s Style Guide, PEP 8)

    - Linter (lintR, Pylint)

    - Documentation – Code (Roxygen2, reStructuredText), README/Instruction Manual (RMarkdown, Jupyter Notebook)

    - Unit testing (testthat, unittest)

    - Packaging

    - Version control (Git)

    These best practices reduce technical debt in long term significantly, foster more collaboration and promote building of more sustainable data products in any organization.
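
    For instance, a unit test in the unittest style mentioned above might look like this (the function under test is made up for illustration):

    # Minimal unit test of the kind advocated above (hypothetical function under test).
    import unittest

    def normalize(values):
        """Scale a list of numbers to the 0-1 range."""
        lo, hi = min(values), max(values)
        return [(v - lo) / (hi - lo) for v in values]

    class TestNormalize(unittest.TestCase):
        def test_range(self):
            result = normalize([2, 4, 6])
            self.assertEqual(result[0], 0.0)
            self.assertEqual(result[-1], 1.0)

    if __name__ == "__main__":
        unittest.main()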

  • Liked Siboli mukherjee

    Siboli mukherjee - Real time Anomaly Detection in Network KPI using Time Series

    20 Mins
    Experience Report
    Intermediate

    Abstract:

    How to accurately detect Key Performance Indicator (KPI) anomalies is a critical issue in cellular network management. In this talk I shall introduce CNR (Cellular Network Regression), a unified performance anomaly detection framework for KPI time-series data. CNR realizes simple statistical modelling and machine-learning-based regression for anomaly detection; in particular, it specifically takes into account seasonality and trend components, and supports automated prediction model retraining based on prior detection results. I demonstrate here how CNR detects two types of anomalies of practical interest, namely sudden drops and correlation changes, based on a large-scale real-world KPI dataset collected from a metropolitan LTE network. I explore various prediction algorithms and feature selection strategies, and provide insights into how regression analysis can make automated and accurate KPI anomaly detection viable.
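
    A toy stand-in for the idea (remove a local baseline, then flag large negative residuals as sudden drops; CNR itself is considerably richer):

    # Toy residual-based detector for "sudden drop" anomalies in a KPI series.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)
    hours = np.arange(500)
    kpi = pd.Series(100 + 10 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, 500))
    kpi.iloc[300] -= 40                                     # inject a sudden drop

    baseline = kpi.rolling(window=24, center=True).mean()   # crude seasonal/trend baseline
    resid = kpi - baseline
    z = (resid - resid.mean()) / resid.std()
    print(kpi[z < -3])                                      # large negative deviations flagged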

    Index Terms—anomaly detection, NPAR (Network Performance Analysis)

    1. INTRODUCTION

    The continuing advances of cellular network technologies make high-speed mobile Internet access a norm. However, cellular networks are large and complex by nature, and hence production cellular networks often suffer from performance degradations or failures due to various reasons, such as background interference, power outages, malfunctions of network elements, and cable disconnections. It is thus critical for network administrators to detect and respond to performance anomalies of cellular networks in real time, so as to maintain network dependability and improve subscriber service quality. To pinpoint performance issues in cellular networks, a common practice adopted by network administrators is to monitor a diverse set of Key Performance Indicators (KPIs), which provide time-series data measurements that quantify specific performance aspects of network elements and resource usage. The main task of network administrators is to identify any KPI anomalies, which refer to unexpected patterns that occur at a single time instant or over a prolonged time period.

    Today’s network diagnosis still mostly relies on domain experts manually configuring anomaly detection rules; such a practice is error-prone, labour intensive, and inflexible. Recent studies propose to use (supervised) machine learning for anomaly detection in cellular networks.

  • Liked Sujoy Roychowdhury

    Sujoy Roychowdhury - Building Multimodal Deep learning recommendation Systems

    45 Mins
    Talk
    Intermediate

    Recommendation systems aid in consumer decision making processes like what to buy, which books to read or which movies to watch. They are especially useful in e-commerce websites where a user has to navigate through several hundred items in order to get to what they’re looking for. The data on how users interact with these systems can be used to analyze user behaviour and make recommendations that are in line with users’ preferences for certain item attributes over others. Collaborative filtering has, until recently, been able to achieve personalization through user based and item based collaborative filtering techniques. Recent advances in the application of Deep Learning in research as well as industry have led people to apply these techniques in recommendation systems.

    Many recommendation systems use product features for recommendations. However, the textual features available on products are almost invariably incomplete in real-world datasets due to various process related issues. Additionally, product features, even when available, cannot completely describe an item. These limitations restrict the success of such recommendation techniques. Deep learning systems can process multi-modal data like text, images and audio, and are thus our choice for implementing a multi-modal recommendation system.

    In this talk we show a real-world application of a fashion recommendation system. It is based on a multi-modal deep learning system which is able to address the problem of poor annotation in the product data. We evaluate different deep learning architectures to process multi-modal data and compare their effectiveness. We highlight the trade-offs seen in a real-world implementation and how these trade-offs affect the actual choice of the architecture.

  • Liked Maulik Soneji

    Maulik Soneji / Jewel James - Using ML for Personalizing Food Search

    45 Mins
    Talk
    Beginner

    GoFood, the food delivery product of Gojek, is one of the largest of its kind in the world. This talk summarizes the approaches considered and lessons learnt during the design and successful experimentation of a search system that uses ML to personalize restaurant results based on the user’s food and taste preferences.

    We formulated the estimation of the relevance as a Learning To Rank ML problem which makes the task of performing the ML inference for a very large number of customer-merchant pairs the next hurdle.
    The talk will cover our learnings and findings for the following:
    a. Creating a Learning Model for Food Search
    b. Targeting experiments to a certain percentage of users
    c. Training the model from real time data
    d. Enriching Restaurant data with custom tags

    Our story should help the audience in making design decisions on the data pipelines and software architecture needed when using ML for relevance ranking in high throughput search systems.
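
    As a sketch of the learning-to-rank formulation (the feature set, labels and library below are assumptions, not GoFood's production system):

    # Sketch of a pairwise learning-to-rank model with grouped (per-query) candidates.
    import numpy as np
    import xgboost as xgb

    rng = np.random.default_rng(0)
    X = rng.random((12, 5))                  # hypothetical (user, restaurant) features
    y = rng.integers(0, 2, 12)               # relevance labels, e.g. clicked / ordered
    group = [4, 4, 4]                        # 3 search queries with 4 candidates each

    ranker = xgb.XGBRanker(objective="rank:pairwise", n_estimators=50)
    ranker.fit(X, y, group=group)
    print(ranker.predict(X[:4]))             # scores used to order candidates for query 1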