Pre-Conf Workshop

Wed, Aug 7
09:30

    Registration - 30 mins

10:00

    Dipanjan Sarkar / Anuj Gupta - Natural Language Processing Bootcamp - Zero to Hero

    10:00 AM - 06:00 PM, Jupiter 1, 49 Interested (Sold Out! Waitlist open)

    Data is the new oil, and unstructured data, especially text, images and videos, contains a wealth of information. However, due to the inherent complexity of processing and analyzing this data, people often refrain from spending the extra time and effort to venture beyond structured datasets and analyze these unstructured sources, which can be a potential gold mine. Natural Language Processing (NLP) is all about leveraging tools, techniques and algorithms to process and understand natural language based unstructured data, such as text and speech.

    Being specialized in domains like computer vision and natural language processing is no longer a luxury but a necessity expected of any data scientist in today’s fast-paced world! With a hands-on and interactive approach, we will understand essential concepts in NLP along with extensive case studies and hands-on examples to master state-of-the-art tools, techniques and frameworks for actually applying NLP to solve real-world problems. We leverage Python 3 and the latest and best state-of-the-art frameworks including NLTK, Gensim, SpaCy, Scikit-Learn, TextBlob, Keras and TensorFlow to showcase our examples. You will be able to learn a fair bit of machine learning as well as deep learning in the context of NLP during this bootcamp.

    In our journey in this field, we have struggled with various problems, faced many challenges, and learned various lessons over time. This workshop is our way of giving back a major chunk of the knowledge we’ve gained in the world of text analytics and natural language processing, where building a fancy word cloud from a bunch of text documents is not enough anymore. You might have had questions like ‘What is the right technique to solve a problem?’, ‘How does text summarization really work?’ and ‘Which are the best frameworks to solve multi-class text categorization?’ among many others! Based on our prior knowledge and learnings from publishing a couple of books in this domain, this workshop should help participants avoid some of the pressing issues in NLP and learn effective strategies to master it.

    The intent of this workshop is to make you a hero in NLP so that you can start applying NLP to solve real-world problems. We start from zero and follow a comprehensive and structured approach to make you learn all the essentials in NLP. We will be covering the following aspects during the course of this workshop with hands-on examples and projects!

    • Basics of Natural Language and Python for NLP tasks
    • Text Processing and Wrangling
    • Text Understanding - POS, NER, Parsing
    • Text Representation - BOW, Embeddings, Contextual Embeddings
    • Text Similarity and Content Recommenders
    • Text Clustering
    • Topic Modeling
    • Text Summarization
    • Sentiment Analysis - Unsupervised & Supervised
    • Text Classification with Machine Learning and Deep Learning
    • Multi-class & Multi-Label Text Classification
    • Deep Transfer Learning and its promise
    • Applying Deep Transfer Learning - Universal Sentence Encoders, ELMo and BERT for NLP tasks
    • Generative Deep Learning for NLP
    • Next Steps

    With over 10 hands-on projects, the bootcamp will be packed with plenty of hands-on examples for you to go through, try out and practice, and we will try to keep theory to a minimum considering the limited time we have and the amount of ground we want to cover. We hope that at the end of this workshop you can take away some useful methodologies to apply when solving NLP problems in the future. We will be using Python to showcase all our examples.
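
    As a small, illustrative taste of the ‘Text Representation’ and ‘Text Classification’ topics listed above (this is not the workshop’s own material), a minimal bag-of-words classifier in scikit-learn might look like the sketch below; the corpus and labels are made up purely for illustration.

        # Minimal sketch: bag-of-words features + a simple classifier (illustrative only).
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.linear_model import LogisticRegression

        # Tiny made-up corpus and sentiment labels, purely for illustration.
        docs = ["the movie was great", "terrible plot and acting",
                "great acting, loved it", "what a terrible movie"]
        labels = [1, 0, 1, 0]

        vectorizer = CountVectorizer()          # bag-of-words representation
        X = vectorizer.fit_transform(docs)      # sparse document-term matrix
        clf = LogisticRegression().fit(X, labels)

        print(clf.predict(vectorizer.transform(["loved the great plot"])))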


    Viral B. Shah / Abhijith Chandraprabhu - Computational Machine Learning

    10:00 AM - 06:00 PM, Jupiter 2, 23 Interested

    You have been hearing about machine learning (ML) and artificial intelligence (AI) everywhere. You have heard about computers recognizing images, generating speech, natural language, and beating humans at Chess and Go.

    The objectives of the workshop:

    1. Learn machine learning, deep learning and AI concepts

    2. Provide hands-on training so that students can write applications in AI

    3. Provide ability to run real machine learning production examples

    4. Understand programming techniques that underlie the production software

    The concepts will be taught in Julia, a modern language for numerical computing and machine learning, but they can be applied in any language the audience is familiar with.

    The workshop will be structured as “reverse classroom” based laboratory exercises that have proven to be engaging and effective learning devices. Knowledgeable facilitators will help students learn the material and extrapolate to custom real-world situations.


    Dr. Sarabjot Singh Anand - The Art and Science of building Recommender Systems

    10:00 AM - 06:00 PM, Neptune, 6 Interested

    In this workshop, we will understand the algorithms behind recommender systems in different domains and gain an appreciation for how the domain impacts the approach used. Attendees will be creating recommenders using user-item matrices, news and graphs, gaining an understanding of collaborative and content-based filtering, text representation, matrix factorization, and random walks.
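
    For a flavour of the matrix-factorization approach mentioned above, here is a minimal, illustrative sketch (not the workshop's own code) that factorizes a small made-up user-item rating matrix with truncated SVD and scores items for a user:

        # Minimal sketch: latent factors from a user-item matrix via truncated SVD (illustrative only).
        import numpy as np
        from sklearn.decomposition import TruncatedSVD

        # Tiny made-up user-item rating matrix (rows = users, columns = items; 0 = not rated).
        ratings = np.array([[5, 4, 0, 1],
                            [4, 5, 1, 0],
                            [0, 1, 5, 4],
                            [1, 0, 4, 5]], dtype=float)

        svd = TruncatedSVD(n_components=2, random_state=0)
        user_factors = svd.fit_transform(ratings)   # users in latent space
        item_factors = svd.components_               # items in latent space

        # Reconstructed scores can be used to rank items a user has not rated yet.
        scores = user_factors @ item_factors
        print(np.round(scores[0], 2))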


    Data Science Kick Starter Overview

    10:00 - 10:15 AM, Mars, 10 Interested
10:15

    Dr. Om Deshmukh - Key Principles to Succeed in Data Science

    10:15 - 11:45 AM, Mars, 6 Interested

    Building a successful career in the field of data science needs a lot more than just a thorough understanding of the various machine learning models. One also has to undergo a paradigm shift with regard to how one would typically approach technical problems. In particular, patterns and insights unearthed from data analysis have to be the guiding North Star for the next best action, rather than the path of action implied by the data scientist's or their superior's intuition alone. One of the things that makes this shift trickier in reality is confirmation bias: confirmation bias is defined as a cognitive bias to interpret information in such a way that it furthers our pre-existing notions.

    In this session, we will discuss how the seemingly disjoint components of the digital ecosystem are working in tandem to make data-driven decisioning central to every functional aspect of every business vertical. This centrality accorded to the data makes it imperative that

    • (a) the data integrity is maintained across the lifetime of the data,
    • (b) the insights generated from the data are interpreted in the holistic context of the sources of the data and the data processing techniques, and
    • (c) human experts are systematically given an opportunity to override any purely data-driven decisions, especially when such decisions may have far-reaching consequences.

    We will discuss these aspects using three case studies from three different business verticals (the financial sector, the logistics sector, and a third one selected by popular vote). For each of these three case studies, the "traditional" way of solving the problem will be contrasted with the data-driven approach. The participants will be split into three groups and each group will be asked to present the best data-driven approaches to solve one of the case studies. The other two groups can critique the presentation/approach. The winning group will be picked based on the presentation and the proposed approach.

    At the end of the session, the attendees should be able to work through any new case study to

    • (a) translate a business problem into an appropriate data-driven problem,
    • (b) formulate strategies to capture and access relevant data,
    • (c) shortlist relevant data modelling techniques to unearth the hidden patterns, and
    • (d) tie back the value of the findings to the business problem.
11:45

    Kavita Dwivedi / Nirav Shah - Building a Scorecard using Python

    11:45 AM - 01:15 PM, Mars, 17 Interested

    Financial scorecards are used widely in all financial organizations for different kinds of ratings. This workshop will take you through the building and validation process of a financial scorecard using data. Financial scorecards are used by banking organizations to judge the financial stability of their portfolio and take business decisions. These scorecards help in tracking and collections.

    This workshop is designed to take the audience through the process of developing a scorecard using Python. The workshop will guide you through the EDA process using Python and will demonstrate the different kinds of visualizations that can enable better data understanding. We will cover the basics of EDA and how Python visualizations can support us in data mining. We aim to cover the step-by-step process of building a scorecard and the use of different machine learning algorithms to build a better scorecard by comparing the outputs of different algorithms. We will demonstrate three machine learning algorithms, Random Forest, Support Vector Machine and Gradient Boosting, and their outcomes while building this scorecard.

    Along the way, we will introduce you to Python libraries that can be used to build these scorecards with more efficacy.

    The key Python libraries that we will be using are Pandas, NumPy, SciPy, Matplotlib and Seaborn. We will demonstrate the functions of these libraries used in building scorecards.

    This will be a hands-on session; attendees can bring their laptops for a better understanding and to follow along.
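
    To illustrate the kind of comparison described above, here is a minimal sketch, assuming scikit-learn and a synthetic stand-in for real scorecard data, that fits the three algorithms mentioned and compares them on a held-out set:

        # Minimal sketch: comparing three classifiers for a scorecard-style binary target (illustrative only).
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        # Synthetic stand-in for application/behaviour variables and a default flag.
        X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        models = {
            "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
            "svm": SVC(probability=True, random_state=0),
            "gradient_boosting": GradientBoostingClassifier(random_state=0),
        }
        for name, model in models.items():
            model.fit(X_train, y_train)
            auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
            print(f"{name}: AUC = {auc:.3f}")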

01:15

    Food

    01:15 - 02:00 PM, Mars, 12 Interested
02:00

    Ramanathan R / Gurram Poorna Prudhvi - Time Series analysis in Python

    02:00 - 06:00 PM, Mars, 21 Interested

    “Time is precious so is Time Series Analysis”

    Time series analysis has been around for centuries, helping us solve everything from astronomical problems to the business problems and advanced scientific research around us now. Time stores precious information which most machine learning algorithms don’t deal with. But time series analysis, which is a mix of machine learning and statistics, helps us get useful insights. Time series can be applied to various fields like economic forecasting, budgetary analysis, sales forecasting, census analysis and much more. In this workshop, we will look at how to dive deep into time series data and make use of deep learning to make accurate predictions.

    The structure of the workshop is as follows:

    • Introduction to Time series analysis
    • Time Series Exploratory Data Analysis and Data manipulation with pandas
    • Forecast time series data with some classical methods (AR, MA, ARMA, ARIMA, GARCH, E-GARCH)
    • Introduction to Deep Learning and Time series forecasting using MLP, RNN, LSTM
    • Financial Time Series data
    • Boosting Techniques

    Libraries Used:

    • Keras (with Tensorflow backend)
    • matplotlib
    • pandas
    • statsmodels
    • prophet
    • pyflux
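
    For a small flavour of the classical forecasting methods listed in the workshop structure above, a minimal sketch with statsmodels (one of the libraries listed) might look like the following; the series here is synthetic, not the workshop's dataset.

        # Minimal sketch: ARIMA forecast on a synthetic series with statsmodels (illustrative only).
        import numpy as np
        import pandas as pd
        from statsmodels.tsa.arima.model import ARIMA

        # Synthetic monthly series: trend + noise, purely for illustration.
        rng = np.random.default_rng(0)
        index = pd.date_range("2015-01-01", periods=60, freq="MS")
        series = pd.Series(np.linspace(100, 160, 60) + rng.normal(0, 5, 60), index=index)

        model = ARIMA(series, order=(1, 1, 1))   # (p, d, q)
        fitted = model.fit()
        print(fitted.forecast(steps=6))           # next six months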

ODSC India Day 1

Thu, Aug 8
08:30

    Registration - 30 mins

09:00
    09:00 - 09:45 AM, Grand Ball Room, 30 Interested

    Since we originally proposed the need for a first-class language, compiler and ecosystem for machine learning (ML), a view that is increasingly shared by many, there have been plenty of interesting developments in the field. Not only have the tradeoffs in existing systems, such as TensorFlow and PyTorch, not been resolved, but they are clearer than ever now that both frameworks contain distinct "static graph" and "eager execution" interfaces. Meanwhile, the idea of ML models fundamentally being differentiable algorithms – often called differentiable programming – has caught on.

    Where current frameworks fall short, several exciting new projects have sprung up that dispense with graphs entirely, to bring differentiable programming to the mainstream. Myia, by the Theano team, differentiates and compiles a subset of Python to high-performance GPU code. Swift for TensorFlow extends Swift so that compatible functions can be compiled to TensorFlow graphs. And finally, the Flux ecosystem is extending Julia’s compiler with a number of ML-focused tools, including first-class gradients, just-in-time CUDA kernel compilation, automatic batching and support for new hardware such as TPUs.

    This talk will demonstrate how Julia is increasingly becoming a natural language for machine learning, the kind of libraries and applications the Julia community is building, the contributions from India (there are many!), and our plans going forward.

10:00

    Welcome Address & Conference Overview - 30 mins

10:30

    Coffee/Tea Break - 30 mins

11:00

    Jared Lander - Making Sense of AI, ML and Data Science

    11:00 - 11:45 AM, Grand Ball Room 1, 10 Interested

    When I was in grad school it was called statistics. A few years later I told people I did machine learning and, after seeing the confused looks on their faces, I changed that to data science, which excited them. More years passed, and without changing anything I do, I now practice AI, which seems scary to some people and somehow involves ML. During this talk we will demystify buzzwords, technical terms and overarching ideas. We'll touch upon key concepts and see a little bit of code in action to get a sense of what is happening in ML, AI or whatever else we want to call the field.


    Juan Manuel Contreras - Beyond Individual Contribution: How to Lead Data Science Teams

    11:00 - 11:45 AM, Grand Ball Room 2, 16 Interested

    Despite the increasing number of data scientists who are being asked to take on managerial and leadership roles as they grow in their careers, there are still few resources on how to manage data scientists and lead data science teams. There is also scant practical advice on how to serve as head of a data science practice: how to set a vision and craft a strategy for an organization to use data science.

    In this talk, I will describe my experience as a data science leader both at a political party (the Democratic Party of the United States of America) and at a fintech startup (Even.com), share lessons learned from these experiences and conversations with other data science leaders, and offer a framework for how new data science leaders can better transition to both managing data scientists and heading a data science practice.


    Anne Ogborn - Symbolic AI in a Machine Learning Age

    11:00 - 11:45 AM, Jupiter, 4 Interested

    Before machine learning took over, AI was done symbolically.

    Symbolic methods still have value, and merging of symbolic and statistical methods is an emerging research area.

    In particular, symbolic methods often have much greater explanatory power. Fusing symbolic methods with ML often creates a more explicable system.

    In this talk we will explore some areas of active work on hybrid applications of symbolic and machine learning.


    Amit Doshi - Integrating Digital Twin and AI for Smarter Engineering Decisions

    11:00 - 11:45 AM, Neptune, 2 Interested

    With the increasing popularity of AI, new frontiers are emerging in predictive maintenance and manufacturing decision science. However, there are many complexities associated with modeling plant assets, training predictive models for them, and deploying these models at scale for near real-time decision support. This talk will discuss these complexities in the context of building an example system.

    First, you must have failure data to train a good model, but equipment failures can be expensive to introduce for the sake of building a data set! Instead, physical simulations can be used to create large, synthetic data sets to train a model with a variety of failure conditions.

    These systems also involve high-frequency data from many sensors, reporting at different times. The data must be time-aligned to apply calculations, which makes it difficult to design a streaming architecture. These challenges can be addressed through a stream processing framework that incorporates time-windowing and manages out-of-order data with Apache Kafka. The sensor data must then be synchronized for further signal processing before being passed to a machine learning model.

    As these architectures and software stacks mature in areas like manufacturing, it is increasingly important to enable engineers and domain experts in this workflow to build and deploy the machine learning models and work with system architects on the system integration. This talk also highlights the benefit of using apps and exposing the functionality through API layers to help make these systems more accessible and extensible across the workflow.

    This session will focus on building a system to address these challenges using MATLAB and Simulink. We will start with a physical model of an engineering asset and walk through the process of developing and deploying a machine learning model for that asset as a scalable and reliable cloud service.

12:00

    Dr. Satnam Singh - AI for CyberSecurity

    12:00 - 12:45 PM, Grand Ball Room 1, 5 Interested

    In the last few years, cybercrooks have sped up their plans for making quick money through ransomware attacks. All enterprises, including banks, government offices, police stations, and big and small businesses, have witnessed the WannaCry, Petya and NotPetya ransomware attacks. The question for us is: what can we do to defend against cyber threats? The cybersecurity industry is pitching heavily to leverage AI to combat cyber threats. Almost every cybersecurity vendor is claiming to have AI in its product. This makes it difficult for end-user enterprises to choose a product, and they need to evaluate the AI capabilities of multiple vendors. In this talk, I will cut through the hype and discuss the reality of what AI can do for cybersecurity. I will share use cases, data pipelines, architectures and algorithms that are proven for information security, along with the challenges in deploying them in the wild. The audience will learn how to combine AI with domain knowledge to build an enterprise AI solution.


    Dr. Ajay Chander / Dr. Ramya Srinivasan - Detecting Bias in AI: A Systems View & A Technique for Datasets

    12:00 - 12:45 PM, Grand Ball Room 2, 5 Interested

    Modern machine learning (ML) offers a new way of creating software to solve problems, focused on learning structures, learning algorithms, and data. In all steps of this process, from the specification of the problem, to the datasets chosen as relevant to the solution, to the choice of learning structures and algorithms, a variety of biases can creep in and compound each other. In this talk, we present a systems view of detecting bias in AI/ML systems as analogous to the software testing problem. To start, a variety of expectations from an AI/ML system can be specified given its intended goals and deployment. Different kinds of bias can then be mapped to different failure modes, which can then be tested for using a variety of techniques. We will also describe a new technique based on Topological Data Analysis to detect bias in source datasets. This technique utilizes a persistence-homology-based visualization and is lightweight: the human-in-the-loop does not need to select metrics or tune parameters, and can carry out this step before choosing a model. We’ll describe experiments on the German credit dataset using this technique to demonstrate its effectiveness.


    Yash Deo - Big Data to Big Intelligence - Using AI to Generate Actionable Insights from Open Source Data

    12:00 - 12:45 PM, Jupiter, 21 Interested

    As a data scientist I have been lucky enough to be a part of highly critical and cutting-edge solutions for prestigious organizations like Intel and the Indian Army. While each of them was an amazing experience in its own right, the challenges I faced and the knowledge I gained from making an Open Source Intelligence gathering and analytics/prediction tool for the Indian Army are unmatched. This experience showed me how powerful open source data can be if it is used correctly.

    An OSINT tool can have some powerful capabilities, such as:

    • Predict and estimate the location of a Twitter/Facebook user (who has disabled location sharing, obviously!) through various metrics.
    • Predict the occurrence of certain events (e.g. riots) based on information gathered from various open sources.
    • Identify and predict accounts of people who may be potential suspects (security use case) or potential influencers (commercial use case).
    • Contextual analysis of words to derive relevant insights.

    Open source data is, however, very challenging to work with for a vast array of reasons. This is the issue I aim to tackle with this talk. I will be going over 3 exciting projects built using open source data, through which I shall demonstrate various techniques to find, modify, model and apply machine learning to the data. While going over the projects I shall also try to draw parallels as to how you can use similar techniques in your own endeavours.


    Deepak Mukunthu - Automated Machine Learning

    12:00 - 12:45 PM, Neptune, 6 Interested

    Intelligent experiences powered by AI can seem like magic to users. Developing them, however, is cumbersome, involving a series of sequential and interconnected decisions along the way that are time-consuming to make. What if there was an automated service that identifies the best machine learning pipelines for a given problem/data? Automated Machine Learning does exactly that!

    With the goal of accelerating AI for data scientists by improving their productivity and democratizing AI for other data personas who want to get into machine learning, Automated ML comes in many different flavors and experiences. Automated ML is one of the top 5 AI trends this year. This session will cover concepts of Automated ML, how it works, different variations of it and how you can use it for your scenarios.

12:45

    Lunch - 60 mins

01:45

    Vijay Gabale - Data Distribution Search: Deep Reinforcement Learning To Improvise Input Datasets

    01:45 - 02:30 PM, Grand Ball Room 1, 12 Interested

    Beyond computer games and neural architecture search, practical applications of Deep Reinforcement Learning to improve classical classification or detection tasks are few and far between. In this talk, I will share a technique and our experiences of applying D-RL to improve the distribution of input datasets to achieve state-of-the-art performance, specifically on object detection tasks.

    Beyond open source datasets, when it comes to building neural networks for real-world problems, the dataset matters, and it is often small and skewed. The talk presents a few fresh perspectives on how to artificially increase the size of datasets while balancing the data distribution. We show that these ideas result in a 2% to 3% increase in accuracy on popular object detection tasks, whereas small and skewed datasets yield up to a 22% increase in model accuracy.


    Dipanjan Sarkar - Explainable Artificial Intelligence - Demystifying the Hype

    01:45 - 02:30 PM, Grand Ball Room 2, 19 Interested

    The field of Artificial Intelligence, powered by Machine Learning and Deep Learning, has gone through some phenomenal changes over the last decade. Starting off as just a pure academic and research-oriented domain, we have seen widespread industry adoption across diverse domains including retail, technology, healthcare, science and many more. More often than not, the standard toolbox of machine learning, statistical or deep learning models remains the same. New models do come into existence, like Capsule Networks, but industry adoption of the same usually takes several years. Hence, in the industry, the main focus of data science or machine learning is more ‘applied’ rather than theoretical, and effective application of these models on the right data to solve complex real-world problems is of paramount importance.

    A machine learning or deep learning model by itself consists of an algorithm which tries to learn latent patterns and relationships from data without hard-coding fixed rules. Hence, explaining how a model works to the business always poses its own set of challenges. There are some domains in the industry, especially in the world of finance like insurance or banking, where data scientists often end up having to use more traditional machine learning models (linear or tree-based). The reason is that model interpretability is very important for the business to explain each and every decision being taken by the model. However, this often leads to a sacrifice in performance. This is where complex models like ensembles and neural networks typically give us better and more accurate performance (since true relationships are rarely linear in nature). We, however, end up being unable to have proper interpretations for model decisions.

    To address and talk about these gaps, I will take a conceptual yet hands-on approach where we will explore some of these challenges in-depth about explainable artificial intelligence (XAI) and human interpretable machine learning and even showcase with some examples using state-of-the-art model interpretation frameworks in Python!
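
    To make this concrete, here is a minimal sketch of the kind of model-interpretation workflow the session refers to, assuming the shap library and a small synthetic dataset; the talk itself may use different frameworks and data.

        # Minimal sketch: explaining a tree-ensemble model with SHAP (illustrative only).
        import shap
        from sklearn.datasets import make_regression
        from sklearn.ensemble import RandomForestRegressor

        X, y = make_regression(n_samples=500, n_features=8, random_state=0)
        model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

        explainer = shap.TreeExplainer(model)
        shap_values = explainer.shap_values(X[:50])   # per-feature contribution for each of 50 rows
        shap.summary_plot(shap_values, X[:50])        # global view of which features drive predictions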


    Ishita Mathur - How GO-FOOD built a Query Semantics Engine to help you find the food you want to order

    01:45 - 02:30 PM, Jupiter, 11 Interested

    Context: The Search problem

    GOJEK is a SuperApp: 19+ apps within an umbrella app. One of these is GO-FOOD, the first food delivery service in Indonesia and the largest food delivery service in Southeast Asia. There are over 300 thousand restaurants on the platform with a total of over 16 million dishes between them.

    Over two-thirds of those who order food online using GO-FOOD do so by utilising text search. Search engines are so essential to our everyday digital experience that we don’t think twice when using them anymore. Search engines involve two primary tasks: retrieval of documents and ranking them in order of relevance. While improving that ranking is an extremely important part of improving the search experience, actually understanding that query helps give the searcher exactly what they’re looking for. This talk will show you what we are doing to make it easy for users to find what they want.

    GO-FOOD uses the ElasticSearch stack with restaurant and dish indexes to search for what the user types. However, this results in only exact text matches and at most, fuzzy matches. We wanted to create a holistic search experience that not only personalised search results, but also retrieved restaurants and dishes that were more relevant to what the user was looking for. This is being done by not only taking advantage of ElasticSearch features, but also developing a Query semantics engine.

    Query Understanding: What & Why

    This is where Query Understanding comes into the picture: it’s about using NLP to correctly identify the search intent behind the query and return more relevant search results; it’s about the interpretation process before the results are even retrieved and ranked. The semantic neighbours of the query itself become the focus of the search process: after all, if I don’t understand what you’re trying to ask for, how will I give you what you want?

    Over the course of this talk, you will learn about how we are taking advantage of word embeddings to build a Query Understanding Engine that is holistically designed to make the customer’s experience as smooth as possible. I will go over the techniques we used to build each component of the engine, the data and algorithmic challenges we faced, and how we solved each problem we came across.
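
    As a tiny illustration of using word embeddings to find the semantic neighbours of a query term (this is not GO-FOOD's actual engine or data, and assumes gensim 4.x), consider:

        # Minimal sketch: semantic neighbours of a query term via word2vec (illustrative only).
        from gensim.models import Word2Vec

        # Tiny made-up "query" corpus; a real engine would train on millions of queries and dish names.
        queries = [
            ["chicken", "fried", "rice"],
            ["chicken", "noodles", "spicy"],
            ["fried", "noodles", "spicy"],
            ["vegetable", "fried", "rice"],
        ]
        model = Word2Vec(sentences=queries, vector_size=32, window=3, min_count=1, epochs=50)

        # Nearest neighbours in embedding space can be used to expand or interpret a query.
        print(model.wv.most_similar("noodles", topn=3))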


    Arun Krishnaswamy - Federated Deep Learning in SaaS Applications

    01:45 - 02:30 PM, Neptune, 6 Interested

    ML in SaaS applications becomes exceedingly difficult due to the lack of access to customer data. The customer data is locked down with no outside access, which presents a huge problem for doing ML on this data in the traditional way. The focus of this presentation is to provide alternate solutions for doing ML in a distributed fashion.

    We will focus on Split Neural Networks, a relatively new distributed ML technique, to solve data access issues with a SaaS application.

    We will walk through the motivations behind a federated approach to ML.

    We will go through some concrete examples that are already using this technique.

    We will also understand the complexity behind handling the gradient descent process in federated deep learning techniques.

02:45

    Venkatraman J - Entity Co-occurrence and Entity Reputation scoring from Unstructured data using Semantic Knowledge graph

    02:45 - 03:05 PM, Grand Ball Room 1, 17 Interested

    Knowledge representation has been a research area in the AI world for many years, and it continues to be one. Once knowledge is represented, reasoning over that extracted knowledge is done by various inferencing techniques. Initial knowledge bases were built using rules from domain experts, and different inferencing techniques like fuzzy inference and Bayesian inference were applied to extract reasoning from those knowledge bases. Semantic networks are another form of knowledge representation, which can represent structured data like WordNet or DBpedia and solve problems in a specific domain by storing entities and relations among entities using ontologies.

    A knowledge graph is another representation technique deeply researched in academia as well as used by businesses in production to augment search relevancy in information retrieval (the Google Knowledge Graph), improve recommender systems, power semantic search applications, and tackle question answering problems. In this talk I will illustrate the benefits of a semantic knowledge graph, how it differs from semantic ontologies, the different technologies involved in building a knowledge graph, and how I built one to analyse unstructured data (Twitter data) to discover hidden relationships from the Twitter corpus. I will also show how the knowledge graph is a data scientist's toolkit for quickly discovering hidden relationships and insights from unstructured data.

    In this talk I will show the technology and architecture used to determine entity reputation and entity co-occurrence using a knowledge graph. Scoring an entity for reputation is useful in many natural language processing tasks and applications such as recommender systems.


    JAYA SUSAN MATHEW - Breaking the language barrier: how do we quickly add multilanguage support in our AI application?

    02:45 - 03:05 PM, Grand Ball Room 2, 4 Interested

    With the need to cater to a global audience, there is a growing demand for applications to support speech identification/translation/transliteration from one language to another. This session aims to introduce the audience to the topic, explain the inner workings of the AI/ML models, and show how to quickly use some of the readily available APIs to identify, translate or even transliterate speech/text within their applications.


    Johnu George / Ramdoot Kumar P - A Scalable Hyperparameter Optimization framework for ML workloads

    02:45 - 03:05 PM, Jupiter, 11 Interested

    In machine learning, hyperparameters are parameters that govern the training process itself. For example, the learning rate, the number of hidden layers, and the number of nodes per layer are typical hyperparameters for neural networks. Hyperparameter tuning is the process of searching for the best hyperparameters to initialize the learning algorithm, thus improving training performance.

    We present Katib, a scalable and general hyperparameter tuning framework based on Kubernetes which is ML-framework agnostic (TensorFlow, PyTorch, MXNet, XGBoost, etc.). You will learn about Katib in Kubeflow, an open source ML toolkit for Kubernetes, as we demonstrate the advantages of hyperparameter optimization by running a sample classification problem. In addition, as we dive into the implementation details, you will learn how to contribute as we expand this platform to include AutoML tools.
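
    To make the idea of hyperparameter search concrete, independently of Katib itself (which runs such searches as distributed experiments on Kubernetes), here is a minimal, generic randomized-search sketch in scikit-learn on synthetic data; it is illustrative only and not Katib's API.

        # Minimal sketch: randomized hyperparameter search (illustrative only; not Katib itself).
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import RandomizedSearchCV

        X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

        search_space = {
            "n_estimators": [50, 100, 200, 400],
            "max_depth": [3, 5, 10, None],
            "min_samples_leaf": [1, 2, 5],
        }
        search = RandomizedSearchCV(RandomForestClassifier(random_state=0),
                                    search_space, n_iter=10, cv=3, random_state=0)
        search.fit(X, y)
        print(search.best_params_, round(search.best_score_, 3))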


    Keshav Peswani - Real time Anomaly detection on telemetry data using neural networks

    02:45 - 03:05 PM, Neptune, 11 Interested

    Description:

    Observability is key in modern architectures to quickly detect and repair problems in microservices. Modern observability platforms have evolved beyond simple application logs and now include distributed tracing systems like Haystack. Combining them with real-time intelligent alerting mechanisms that produce accurate alerts helps in the automated detection of these problems.

    Abstract

    We at Expedia work on a mission of connecting people to places through the power of technology. To accomplish this, we build and run hundreds of micro-services that provide different functionalities to serve one single customer request. Now what happens when one or more services fail at the same time? We are going to look at how Expedia determines these failed services in an automated manner and provides a high quality of service, which has led to huge improvements in our mean time to detect (MTTD) and mean time to know (MTTK).

    In this talk, we will present the journey of distributed tracing at Expedia that started with Zipkin as a prototype and ended up with building our own solution (in open source) using OpenTracing APIs. We will do a deep dive into our architecture and demonstrate how we ingest terabytes of tracing data in production for hundreds of our micro-services and use this data for trending service errors/latencies/rates. With the increasing number of microservices, we felt the need for a real-time intelligent alerting and monitoring system to contribute to the goal of reducing MTTD and MTTK and move towards 24/7 reliability.

    With unique behavioural patterns for each of the service errors, leveraging neural networks to understand the behaviour changes for each micro-service and raise alerts was indeed a challenging task. The task uncovered a few unexpected challenges, and the solution was less straightforward than we initially estimated. But ultimately the anomaly detector using neural networks produced results that beat our expectations, once again validating the interest in neurocomputing that is overtaking the industry.

    To achieve this, we predict service failures in the microservices using recurrent neural networks on telemetry data and perform anomaly detection on the predicted values. We will show how we train a recurrent neural network and auto-tune hyperparameters using Bayesian optimization methods. We will also deep dive into the architecture of the automated training pipeline and how the anomaly detection works in a streaming manner using Kafka (KStreams) as the backbone, with the model deployed on the cloud in a cost-effective manner. At the end, we will also discuss possible areas for improvement to reduce false positives, which include having human intervention as the feedback loop.
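
    As a heavily simplified illustration of the forecast-then-flag idea described above (a synthetic one-dimensional series and Keras are assumed; Expedia's production pipeline is of course far more involved):

        # Minimal sketch: forecast a telemetry series with an LSTM and flag large residuals (illustrative only).
        import numpy as np
        from tensorflow import keras

        # Synthetic "error count" series with an injected spike.
        rng = np.random.default_rng(0)
        series = np.sin(np.linspace(0, 20, 500)) + rng.normal(0, 0.1, 500)
        series[400] += 3.0  # injected anomaly

        window = 20
        X = np.array([series[i:i + window] for i in range(len(series) - window)])[..., None]
        y = series[window:]

        model = keras.Sequential([
            keras.layers.LSTM(16, input_shape=(window, 1)),
            keras.layers.Dense(1),
        ])
        model.compile(optimizer="adam", loss="mse")
        model.fit(X, y, epochs=5, batch_size=32, verbose=0)

        # Flag timesteps where the prediction error is unusually large.
        residuals = np.abs(model.predict(X, verbose=0).ravel() - y)
        threshold = residuals.mean() + 3 * residuals.std()
        print("anomalous timesteps:", np.where(residuals > threshold)[0] + window)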

03:05

    Coffee/Tea Break - 25 mins

03:30

    Badri Narayanan Gopalakrishnan / Shalini Sinha / Usha Rengaraju - Lifting Up: Deep Learning for implementing anti-hunger and anti-poverty programs

    03:30 - 04:15 PM, Grand Ball Room 1, 18 Interested

    Ending poverty and zero hunger are the top two goals the United Nations aims to achieve by 2030 under its sustainable development program. Hunger and poverty are byproducts of multiple factors, and fighting them requires a multi-fold effort from all stakeholders. Artificial Intelligence and Machine Learning have transformed the way we live, work and interact. However, the economics of business has limited their application to a few segments of society. A much more conscious effort is needed to bring the power of AI to the benefit of the ones who actually need it the most: people below the poverty line. Here we present our thoughts on how deep learning and big data analytics can be combined to enable effective implementation of anti-poverty programs. The advancements in deep learning and micro-diagnostics, combined with effective technology policy, are the right recipe for the progressive growth of a nation. Deep learning can help identify poverty zones across the globe based on night-time images, where the level of light correlates with higher economic growth. Once the areas of lower economic growth are identified, geographic and demographic data can be combined to establish micro-level diagnostics of these underdeveloped areas. The insights from the data can help plan an effective intervention program. Machine learning can further be used to identify potential donors, investors and contributors across the globe based on their skill set, interests, history, ethnicity, purchasing power and their native connection to the location of the proposed program. Adequate resource allocation and efficient design of the program will still not guarantee the success of a program unless the project execution is supervised at the grass-roots level. Data analytics can be used to monitor project progress and effectiveness and detect anomalies in case of any fraud or mismanagement of funds.


    Anuj Gupta - Continuous Learning Systems: Building ML systems that keep learning from their mistakes

    03:30 - 04:15 PM, Grand Ball Room 2, 15 Interested

    Won't it be great to have ML models that can update their “learning” as and when they make a mistake and a correction is provided in real time? In this talk we look at a concrete business use case which warrants such a system. We will take a deep dive to understand the use case, how we went about building a continuously learning system for text classification, the approaches we took, and the results we got.

    For most machine learning systems, the “train once, just predict thereafter” paradigm works well. However, there are scenarios when this paradigm does not suffice and the model needs to be updated often. Two of the most common cases are:

    1. When the distribution is non-stationary i.e. the distribution of the data changes. This implies that with time the test data will have very different distribution from the training data.
    2. The model needs to learn from its mistakes.

    While (1) is often addressed by retraining the model, (2) is often addressed using batch updates. Batch updating requires collecting a sizeable number of feedback points. What if you have far fewer feedback points? You need a model that can learn continuously, as and when the model makes a mistake and feedback is provided. To the best of our knowledge there is very limited literature on this.
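
    One common building block for such systems, shown here only as a simplified illustration and not necessarily the approach the talk covers, is an incrementally updatable model; scikit-learn's partial_fit lets a single corrective feedback point update the classifier immediately:

        # Minimal sketch: updating a text classifier on a single feedback point (illustrative only).
        from sklearn.feature_extraction.text import HashingVectorizer
        from sklearn.linear_model import SGDClassifier

        classes = ["billing", "technical"]
        vec = HashingVectorizer(n_features=2**16)      # stateless, so no re-fitting of the vectorizer needed
        clf = SGDClassifier(random_state=0)

        # Initial batch of labelled tickets (made up for illustration).
        texts = ["invoice amount is wrong", "app crashes on login", "refund not received"]
        labels = ["billing", "technical", "billing"]
        clf.partial_fit(vec.transform(texts), labels, classes=classes)

        # Later, the model misclassifies a ticket and a human supplies the correct label:
        feedback_text, correct_label = ["payment page throws an error"], ["technical"]
        clf.partial_fit(vec.transform(feedback_text), correct_label)   # immediate, single-example update
        print(clf.predict(vec.transform(feedback_text)))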


    Aditya Singh Tomar - Building Your Own Data Visualization Platform

    03:30 - 04:15 PM, Jupiter, 11 Interested

    Ever thought about having a mini interactive visualization tool that caters to your specific requirements? That is the product I created when I started independent consulting. Two years on, I have now decided to make it public, including the source code.

    This session will give you an overview about creating a custom, personalized version of a visualization platform built on R and Shiny. We will focus on a mix of structure and flexibility to address the varying requirements. We will look at the code itself and the various components involved while exploring the customization options available to ensure that the outcome is truly a personal product.


    Paolo Tamagnini / Kathrin Melcher - Guided Analytics - Building Applications for Automated Machine Learning

    03:30 - 05:00 PM, Neptune, 10 Interested

    In recent years, a wealth of tools has appeared that automate the machine learning cycle inside a black box. We take a different stance. Automation should not result in black boxes, hiding the interesting pieces from everyone. Modern data science should allow automation and interaction to be combined flexibly into a more transparent solution.

    In some specific cases, if the analysis scenario is well defined, then full automation might make sense. However, more often than not, these scenarios are not that well defined and not that easy to control. In these cases, a certain amount of interaction with the user is highly desirable.

    By mixing and matching interaction with automation, we can use Guided Analytics to develop predictive models on the fly. More interestingly, by leveraging automated machine learning and interactive dashboard components, custom Guided Analytics Applications, tailored to your business needs, can be created in a few minutes.

    We'll build an application for automated machine learning using KNIME Software. It will have an input user interface to control the settings for data preparation, model training (e.g. using deep learning, random forest, etc.), hyperparameter optimization, and feature engineering. We'll also create an interactive dashboard to visualize the results with model interpretability techniques. At the conclusion of the workshop, the application will be deployed and run from a web browser.

04:30
    04:30 - 05:15 PM, Grand Ball Room 1, 10 Interested

    With the big boom in the Data Science and Analytics industry in India, a lot of data scientists are keen on learning a variety of learning algorithms and data manipulation techniques. At the same time, there is growing interest among data scientists to give back to society, harness their acquired skills and help fix some of the major burning problems in the nation. But how does one go about finding meaningful datasets connected to societal problems and planning data-for-good projects? This session will summarize our experience of working in the data-for-good sector over the last 5 years, sharing a few interesting datasets and associated use cases of employing machine learning and artificial intelligence in the social sector. The Indian social sector is replete with a good volume of open data on attributes like annotated images, geospatial information, time series, Indic languages, satellite imagery, etc. We will dive into understanding the journey of a data-for-good project, getting essential open datasets and understanding insights from certain data projects in the development sector. Lastly, we will explore how we can work with various communities and scale our algorithmic experiments into meaningful contributions.


    Subhasish Misra - Causal data science: Answering the crucial ‘why’ in your analysis.

    04:30 - 05:15 PM, Grand Ball Room 2, 15 Interested

    Causal questions are ubiquitous in data science. For example, questions such as whether changing a feature on a website led to more traffic, or whether digital ad exposure led to incremental purchases, are deeply rooted in causality.

    Randomized tests are considered to be the gold standard when it comes to getting to causal effects. However, experiments are in many cases unfeasible or unethical. In such cases one has to rely on observational (non-experimental) data to derive causal insights. The crucial difference between randomized experiments and observational data is that in the former, test subjects (e.g. customers) are randomly assigned a treatment (e.g. digital advertisement exposure). This helps curb the possibility that user response (e.g. clicking on a link in the ad and purchasing the product) across the two groups of treated and non-treated subjects is different owing to pre-existing differences in user characteristics (e.g. demographics, geo-location, etc.). In essence, we can then attribute divergences observed post-treatment in key outcomes (e.g. purchase rate) as the causal impact of the treatment.

    This treatment assignment mechanism that makes causal attribution possible via randomization is absent though when using observational data. Thankfully, there are scientific (statistical and beyond) techniques available to ensure that we are able to circumvent this shortcoming and get to causal reads.

    The aim of this talk will be to offer a practical overview of the above aspects of causal inference, a discipline which lies at the fascinating confluence of statistics, philosophy, computer science, psychology, economics, and medicine, among others. Topics include:

    • The fundamental tenets of causality and measuring causal effects.
    • Challenges involved in measuring causal effects in real world situations.
    • Distinguishing between randomized and observational approaches to measuring the same.
    • Provide an introduction to measuring causal effects using observational data via matching and its extension, propensity-score-based matching, with a focus on a) the intuition and statistics behind it, b) tips from the trenches, based on the speaker's experience with these techniques, and c) practical limitations of such approaches (a minimal code sketch follows this list).
    • Walk through an example of how matching was applied to get to causal insights regarding effectiveness of a digital product for a major retailer.
    • Finally, conclude with why having a nuanced understanding of causality is all the more important in the big data era we are in.
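
    For a flavour of the matching approach referenced above, the deliberately simplified sketch below uses synthetic data and nearest-neighbour matching on a logistic-regression propensity score; real analyses require far more care with balance checks and overlap.

        # Minimal sketch: propensity-score matching on synthetic data (illustrative only).
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.neighbors import NearestNeighbors

        rng = np.random.default_rng(0)
        n = 2000
        X = rng.normal(size=(n, 3))                                # user characteristics
        treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))      # treatment depends on X
        outcome = 2.0 * treated + X[:, 0] + rng.normal(size=n)     # true effect = 2.0

        # 1) Estimate propensity scores P(treated | X).
        ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

        # 2) Match each treated unit to its nearest control on the propensity score.
        treated_idx, control_idx = np.where(treated == 1)[0], np.where(treated == 0)[0]
        nn = NearestNeighbors(n_neighbors=1).fit(ps[control_idx].reshape(-1, 1))
        _, match = nn.kneighbors(ps[treated_idx].reshape(-1, 1))
        matched_controls = control_idx[match.ravel()]

        # 3) Estimated causal effect = mean outcome difference over matched pairs.
        print(round(np.mean(outcome[treated_idx] - outcome[matched_controls]), 2))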

    Venkata Pingali - Accelerating ML using Production Feature Engineering Platform

    04:30 - 05:15 PM, Jupiter, 13 Interested

    Anecdotally only 2% of the models developed are productionized, i.e., used day to day to improve business outcomes. Part of the reason is the high cost and complexity of productionization of models. It is estimated to be anywhere from 40 to 80% of the overall work.

    In this talk, we will share Scribble Data’s insights into the productionization of ML, and how to reduce the cost and complexity in organizations. It is based on the last two years of work at Scribble developing and deploying a production ML feature engineering platform, and a study of platforms from major organizations such as Uber. This talk expands on a previous talk given in January.

    First, we discuss the complexity of production ML systems, and where time and effort go. Second, we give an overview of feature engineering, which is an expensive ML task, and the associated challenges. Third, we suggest an architecture for a production feature engineering platform. Last, we discuss how one could go about building one for your organization.

05:00

    Rahul Agarwal - Continuous Data Integrity Tracking

    05:00 - 05:20 PM, Neptune, 6 Interested

    "In God we trust; all others must bring data." - W. E. Deming, Author & Professor

    This philosophy is imbibed in the very core of American Express: being a data-driven company, it makes all strategic decisions based on numbers. But who ensures that the numbers are correct? That is the work of Data Quality and Governance. Given the dependency on data, ensuring data quality is one of our prime responsibilities.

    At American Express, we have data getting generated and stored across multiple platforms. For example, in a market like the US, we process more than ~200 transactions every second and make an authorization decision. Given this speed and scale of data generation, ensuring data quality becomes imperative and a unique challenge in itself. There are hundreds of models running in production platforms within AMEX, with thousands of variables. Many variables are created/populated originally in legacy systems (or have components derived from there), which are then passed on to downstream systems for manipulation and creating new attributes. A tech glitch or a logic issue could impact any variable at any point of this process, resulting in disastrous consequences in model outputs which can get transformed into real-world customer impact, leading to financial and reputational risk for the bank. So how do we catch these anomalies before they adversely impact processes?

    Traditional approaches to anomaly detection have relied on measuring the deviation from the mean of the variable. Fancier ones employ time-series-based forecasting. But both these approaches are fraught with high levels of false positives. Since every alert generated has to be analyzed by the business, which has a cost, a high level of accuracy is desired. In this talk, we will discuss how AMEX has approached and solved this problem.

05:30

    Amar Lalwani - AI in Education: Transforming Education using Personalised Adaptive Learning

    05:30 - 06:15 PM, Grand Ball Room 1, 7 Interested

    There has been a significant rise in the gross enrolment ratio of students in public schools over the past few decades. However, there is a decline in their learning outcomes, which results from a staff crunch, crowded classrooms and insufficient infrastructure. Moreover, students are learning less as they move to higher classes. The National Achievement Survey 2017 shows that the national average score of a grade 8 student was barely 40% in Maths, Science and Social Studies. The survey also highlights the fact that the country is short of at least 10 lakh qualified teachers. With the advent of technology and AI, personalised adaptive learning solutions might solve the current education crisis.

    With the belief that every child is unique, funtoot, an Intelligent Tutoring System, designs a personalised learning path for each child. Funtoot tailors the teaching instructions according to the knowledge state of each learner and leads the learner along her unique learning trajectory. Funtoot is used by more than 1.5 lakh school students (Grades 2 to 9) across different states in India.

    In this talk, we will go deep into the architecture of an Intelligent Tutoring System. We will start with Domain Model which helps deconstruct the knowledge. We will then move to Student Model which is an overlay on Domain Model used to estimate the students' knowledge states. We will also touch upon the Tutor Model to understand how the student's cognitive and affective states are used to design the student's personalised learning path.


    Avishkar Gupta / Dipanjan Sarkar - Leveraging AI to Enhance Developer Productivity & Confidence

    05:30 - 06:15 PM, Grand Ball Room 2, 10 Interested

    A major approach to the application of AI is leveraging it to create a safer world around us, as well as helping people make choices. With the open source revolution having taken the world by storm, and developers relying on various upstream third-party dependencies (too many to choose from! See http://www.modulecounts.com/) to develop applications moving petabytes of sensitive data and mission-critical code whose failure can be disastrous, it is required now more than ever to build better developer tooling to help developers make safer, better choices in terms of their dependencies, as well as providing them with more insights around the code they are using.

    Though we are data scientists, at heart we are also developers building intelligent systems powered by AI. We, the Red Hat developer group, seek to do the same through our “Dependency Analytics” platform and extension. We call this 'AI-based insights for developers, by developers'! In this session we will go into the details of the deep learning models we have implemented and deployed to solve two major problems:

    1. Dependency Recommendations: Recommend dependencies to a user for their specific application stack by trying to guess their intent as well as an overview of how we maintain and manage these production AI systems.
    2. Pro-active Security and Vulnerability Analysis: We would also touch upon how our platform aims to make developer applications safer by way of CVE (Common Vulnerabilities and Exposures) analyses and the experimental deep learning models we have built to proactively identify potential vulnerabilities. This shall be followed by a short architectural overview of the entire platform.

    If we have enough time, we intend to showcase some sample code as a part of a tutorial of how we built these deep learning models and do a walkthrough of the same!


    Akshay Bahadur - Minimizing CPU utilization for deep networks

    05:30 - 06:15 PM, Jupiter, 11 Interested

    The advent of machine learning, along with its integration with computer vision, has enabled users to efficiently develop image-based solutions for innumerable use cases. A machine learning model consists of an algorithm which draws some meaningful correlation from the data without being tightly coupled to a specific set of rules. It's crucial to explain the subtle nuances of the network along with the use case we are trying to solve. With the advent of technology, the quality of images has increased, which in turn has increased the need for resources to process the images for building a model. The main question, however, is the need to develop lightweight models while keeping the performance of the system intact.
    To connect the dots, we will talk about the development of these applications specifically aimed at providing equally accurate results without using many resources. This is achieved by using image processing techniques along with optimizing the network architecture.
    These applications will range from recognizing digits and alphabets which the user can 'draw' at runtime, developing a state-of-the-art facial recognition system, predicting hand emojis, developing a self-driving system, and detecting malaria and brain tumours, along with Google's 'Quick, Draw!' project of hand doodles.
    In this presentation, we will discuss the development of such applications with minimal CPU usage.


    Anil Arora - Building Machine Learning models from scratch and Deploying in downstream Applications

    schedule  05:30 - 06:15 PM place Neptune people 8 Interested

    The session will start with a brief introduction to the evolutionary transformation of the SAS platform (about 5-7 minutes), followed by a jump right into the more exciting part of the session: a demo on how to build machine learning models from scratch. The session will also emphasize the need for feature engineering before building any machine learning models. Many organizations still face resistance in building ML models due to the loss of model interpretability, so we will see how ML models can be interpreted in SAS with various out-of-the-box statistics. The demo will also cover the AutoML functionality to give data scientists a kickstart in developing and refining (if needed) ML models. At the end, the demo will cover how to consume or deploy the models in downstream applications such as mobile and websites, along with model governance. For pure open-source data science people, the demo will conclude with how they can embrace and extend the power of open source with SAS.

06:30

    Panel on Ethics - 45 mins

07:30

    Reception Dinner & Networking - 150 mins

ODSC India Day 2

Fri, Aug 9
09:00
  • Added to My Schedule
    keyboard_arrow_down
    Grant Sanderson

    Grant Sanderson - Concrete before Abstract

    schedule  09:00 - 09:45 AM place Grand Ball Room people 26 Interested

    This talk outlines a principle of technical communication which seems simple at first but is devilishly difficult to abide by. It's a principle I try to keep in mind when creating videos aimed at making math and related fields more accessible, and it stands to benefit anyone who regularly needs to describe mathematical ideas in their work. Put simply, it's to resist the temptation to open a topic by describing a general result or definition, and instead let examples precede generality. More than that, it's about finding the type of example which guides the audience to rediscover the general results for themselves. We'll look, aptly enough, at examples of what I mean by this, why it's deceptively difficult to follow, and why this ordering matters.

10:00

    Important Announcements - 15 mins

10:15

    Coffee/Tea Break - 30 mins

10:45
  • schedule  10:45 - 11:30 AM place Grand Ball Room 1 people 13 Interested

    In recent years, there has been a lot of research in the area of sequence to sequence learning with neural network models. These models are widely used for applications such as language modeling, translation, part of speech tagging, and automatic speech recognition. In this talk, we will give an overview of sequence to sequence learning, starting with a description of recurrent neural networks (RNNs) for language modeling. We will then explain some of the drawbacks of RNNs, such as their inability to handle input and output sequences of different lengths, and describe how encoder-decoder networks, and attention mechanisms solve these problems. We will close with some real-world examples, including how encoder-decoder networks are used at LinkedIn.
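
    To make the encoder-decoder idea concrete, here is a minimal Keras sketch of a seq2seq model; the vocabulary sizes and dimensions are illustrative assumptions, and real systems add attention and beam-search decoding.

    import tensorflow as tf
    from tensorflow.keras import layers, Model

    src_vocab, tgt_vocab, latent_dim = 5000, 5000, 256  # assumed sizes

    # Encoder: embeds the source sequence and summarizes it into a state vector.
    enc_inputs = layers.Input(shape=(None,), name="source_tokens")
    enc_emb = layers.Embedding(src_vocab, latent_dim)(enc_inputs)
    _, state_h, state_c = layers.LSTM(latent_dim, return_state=True)(enc_emb)

    # Decoder: generates the target sequence conditioned on the encoder state.
    dec_inputs = layers.Input(shape=(None,), name="target_tokens")
    dec_emb = layers.Embedding(tgt_vocab, latent_dim)(dec_inputs)
    dec_out, _, _ = layers.LSTM(latent_dim, return_sequences=True,
                                return_state=True)(dec_emb,
                                                   initial_state=[state_h, state_c])
    dec_logits = layers.Dense(tgt_vocab, activation="softmax")(dec_out)

    model = Model([enc_inputs, dec_inputs], dec_logits)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    model.summary()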

  • Added to My Schedule
    keyboard_arrow_down
    Dat Tran

    Dat Tran - Image ATM - Image Classification for Everyone

    schedule  10:45 - 11:30 AM place Grand Ball Room 2 people 13 Interested

    At idealo.de we store and display millions of images. Our gallery contains pictures of all sorts: vacuum cleaners, bike helmets, as well as hotel rooms. Working with a huge volume of images brings some challenges: How do we organize the galleries? What exactly is in there? Do we actually need all of it?

    To tackle these problems you first need to label all the pictures. In 2018 our Data Science team completed four projects in the area of image classification, with many more to come in 2019. Therefore, we decided to automate this process by creating a piece of software we called Image ATM (Automated Tagging Machine). With the help of transfer learning, Image ATM enables the user to train a deep learning model without knowledge or experience in machine learning. All you need is data and a spare couple of minutes!

    In this talk we will discuss the state-of-the-art technologies available for image classification and present Image ATM in the context of these technologies. We will then give a crash course on our product, guiding you through different ways of using it: in the shell, in a Jupyter Notebook and in the cloud. We will also talk about our roadmap for Image ATM.
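
    The kind of transfer-learning setup a tool like Image ATM automates can be sketched roughly as follows (this is an illustration, not Image ATM's actual implementation): reuse an ImageNet-pretrained backbone and train only a small classification head on the labelled pictures.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    num_classes = 4  # assumed: e.g. vacuum cleaners, bike helmets, hotel rooms, other

    base = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                             input_shape=(224, 224, 3))
    base.trainable = False  # freeze the pretrained backbone

    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.2),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    # model.fit(train_images, train_labels, epochs=5)  # a few minutes on labelled data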

  • Added to My Schedule
    keyboard_arrow_down
    Dr. Vikas Agrawal

    Dr. Vikas Agrawal - Non-Stationary Time Series: Finding Relationships Between Changing Processes for Enterprise Prescriptive Systems

    schedule  10:45 - 11:30 AM place Jupiter people 8 Interested

    It is too tedious to keep asking questions, seeking explanations or setting thresholds for trends or anomalies. Why not find problems before they happen, find explanations for the glitches and suggest the shortest paths to fixing them? Businesses are always changing, along with their competitive environment and processes. No static model can handle that. Using dynamic models that find time-delayed interactions between multiple time series, we need to make proactive forecasts of anomalous trends of risks and opportunities in operations, sales, revenue and personnel, based on multiple factors influencing each other over time. We need to know how to define what is “normal” and determine when the business processes from six months ago no longer apply, or apply only to 35% of the cases today, while explaining the causes of risk and sources of opportunity, their relative directions and magnitude, in the context of decision-making and transactional applications, using state-of-the-art techniques.

    Real-world processes and businesses keep changing, with one moving part changing another over time. Can we capture these changing relationships? Can we use multiple variables to find risks on the key ones we care about? We will take a fun journey culminating in the most recent developments in the field. Which methods work well and which break? What can we use in practice?

    For instance, we can show a CEO that they would miss their revenue target by over 6% for the quarter, and tell them why, i.e. in what ways their business has changed over the last year. Then we provide prioritized lists of the quickest, cheapest and least risky paths to help turn the tide, with estimates of relative costs and expected probability of success.
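
    One small building block behind such dynamic models is scanning for the lag at which one series best explains another. A toy sketch with synthetic data and an assumed 10-step delay:

    import numpy as np

    rng = np.random.default_rng(0)
    n, true_lag = 500, 10
    driver = rng.normal(size=n)                                     # e.g. marketing spend
    target = np.roll(driver, true_lag) + 0.3 * rng.normal(size=n)   # delayed response + noise

    def best_lag(x, y, max_lag=30):
        """Return the lag (in steps) at which x is most correlated with a later y."""
        corrs = {lag: np.corrcoef(x[:-lag], y[lag:])[0, 1] for lag in range(1, max_lag + 1)}
        return max(corrs, key=corrs.get), corrs

    lag, corrs = best_lag(driver, target)
    print(f"strongest relationship at lag {lag}, correlation {corrs[lag]:.2f}")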

  • Added to My Schedule
    keyboard_arrow_down
    Govind Chada

    Govind Chada - Using 3D Convolutional Neural Networks with Visual Insights for Classification of Lung Nodules and Early Detection of Lung Cancer

    schedule  10:45 - 11:30 AM place Neptune people 6 Interested

    Lung cancer is the leading cause of cancer death among both men and women in the U.S., with more than a hundred thousand deaths every year. The five-year survival rate is only 17%; however, early detection of malignant lung nodules significantly improves the chances of survival and prognosis.

    This study aims to show that 3D Convolutional Neural Networks (CNNs), which use the full 3D nature of the input data, perform better in classifying lung nodules compared to previously used 2D CNNs. It also demonstrates an approach to develop an optimized 3D CNN that performs with state-of-the-art classification accuracies. CNNs, like other deep neural networks, have been black boxes, giving users no understanding of why they predict what they predict. This study, for the first time, demonstrates that Gradient-weighted Class Activation Mapping (Grad-CAM) techniques can provide visual explanations for model decisions in lung nodule classification by highlighting discriminative regions. Several CNN architectures using Keras and TensorFlow were implemented as part of this study. The publicly available LUNA16 dataset, comprising 888 CT scans with candidate nodules manually annotated by radiologists, was used to train and test the models. The models were optimized by varying the hyperparameters to reach accuracies exceeding 90%. Grad-CAM techniques were applied to the optimized 3D CNN to generate images that provide quality visual insights into the model's decision making. The results demonstrate the promise of 3D CNNs as highly accurate and trustworthy classifiers for early lung cancer detection, leading to improved chances of survival and prognosis.
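
    A simplified 3D CNN of the kind described can be sketched in Keras as below; the 32x32x32 patch size and layer sizes are assumptions, not the study's optimized architecture.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    def nodule_classifier_3d(input_shape=(32, 32, 32, 1)):
        model = models.Sequential([
            layers.Input(shape=input_shape),
            layers.Conv3D(16, kernel_size=3, activation="relu", padding="same"),
            layers.MaxPooling3D(pool_size=2),
            layers.Conv3D(32, kernel_size=3, activation="relu", padding="same"),
            layers.MaxPooling3D(pool_size=2),
            layers.Flatten(),
            layers.Dense(64, activation="relu"),
            layers.Dropout(0.3),
            layers.Dense(1, activation="sigmoid"),  # malignant vs benign nodule
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
        return model

    nodule_classifier_3d().summary()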

11:45
  • schedule  11:45 AM - 12:30 PM place Grand Ball Room 1 people 8 Interested

    Dr. Vikram Vij, Senior Vice President, Head of Voice Intelligence Team, Samsung Research India – Bangalore (SRIB), will share the journey that Samsung has undertaken in developing its voice assistant Bixby, and particularly the Automatic Speech Recognition (ASR) system that powers it. ASR is one of the complex engines that power modern virtual assistants. Several independent components such as pre-processors (Acoustic Echo Cancellation, Noise Suppression, Neural Beamforming and so on), wake-word detectors, end-point detectors, hybrid decoders and inverse text normalizers work together to make a complete ASR system. We are in an exciting period, with tremendous advancements made in recent times. The development of End-to-End (E2E) ASR systems is one such advancement that has boosted recognition accuracy significantly, and it has the potential to make speech recognition ubiquitous by fitting completely on-device, thereby bringing down latency and cost and addressing the privacy concerns of users. Samsung, the largest device maker on the planet, envisions huge value in bringing Bixby to a variety of existing devices and new devices such as social robots, which raises many technical challenges, particularly in making the ASR very robust. In this talk, Dr. Vikram is excited to present the cutting-edge technologies that his team is developing: Far-Field Speech Recognition, E2E ASR, Whisper Detection, Contextual End-Point Detection (EPD), On-device ASR and so on. He will also elaborate on the research activities his team is relentlessly pursuing.

  • Added to My Schedule
    keyboard_arrow_down
    Jeetendra Kumar Sharma

    Jeetendra Kumar Sharma / Vikas Grover - Leveraging Video Analytics at United Airlines: Calculating Deplaning Times Using Deep Learning

    schedule  11:45 AM - 12:30 PM place Grand Ball Room 2 people 10 Interested

    For United Airlines, running a Safe and Efficient airline is core to our business. And with such a complex operation, we need to constantly track key events that keep the airline running smoothly. While tracking these events can be time-intensive and laborious, we believe developments in deep learning and edge computing are going to help us simplify that process. Over the past few months, United’s Data Science team has been exploring how to leverage advances in computer vision to solve some of these problems. Our presentation will focus on solving one of these tasks: timing how long it takes for passengers to exit an aircraft. We’ll provide an overview of key concepts of video analytics, share how we leveraged open source technology to build a solution and provide a demonstration of our work.

  • schedule  11:45 AM - 12:30 PM place Jupiter people 15 Interested

    In today's world, the majority of information is generated by self-sustaining systems such as bots, crawlers, servers and various online services. This information flows along the axis of time and is generated by these actors under some complex logic: for example, a stream of buy/sell order requests from an order gateway in the financial world, a stream of web requests from a monitoring or crawling service, or a hacker's bot sitting on the internet and attacking various computers. We may not be able to know the motive or intention behind these data sources, but via unsupervised techniques we can try to infer the pattern or correlate the events based on their multiple occurrences on the axis of time. Thus we can automatically identify signatures of various actors and take appropriate actions.

    Sessionisation is one such unsupervised technique that tries to find the signal in a stream of events associated with timestamps. In an ideal world it would resolve to finding periods within a mixture of sinusoidal waves. In the real world this is a much more complex activity, as even systematic events generated by machines over the internet behave in an erratic manner. So the notion of a period for a signal also changes in the real world: we can no longer associate it with a single number; it has to be treated as a random variable, with an expected value and associated variance. Hence we need to model "stochastic periods" and learn their probability distributions in an unsupervised manner. This is done via non-parametric Bayesian techniques with a Gaussian prior.

    In this talk we will walk through a real security use case solved via Sessionisation for the SOC (Security Operations Centre) of an international firm with offices in 56 countries, monitored by a central SOC team.

    We will go through a Sessionisation technique based on stochastic periods. The journey begins by extracting relevant data from a sequence of timestamped events. We then apply various techniques such as FFT (Fast Fourier Transform), kernel density estimation, optimal signal selection and Gaussian Mixture Models, eventually discovering patterns in timestamped events.

    Key concepts explained in talk: Sessionisation, Bayesian techniques of Machine Learning, Gaussian Mixture Models, Kernel density estimation, FFT, stochastic periods, probabilistic modelling
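
    A toy sketch of the first steps above: turn timestamped events into a counts-per-interval signal, look for a dominant period with an FFT, and fit a Gaussian mixture over inter-arrival times so the period is treated as a random variable with mean and variance. The data is synthetic; real SOC logs are far noisier.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)
    # Hypothetical bot: fires roughly every 60 seconds with per-event jitter, over ~1 day.
    base = np.arange(1, 1401) * 60.0
    timestamps = np.sort(base + rng.normal(0, 5, size=1400))

    # 1) Dominant period via FFT of the binned event-count signal.
    bins = np.arange(0, timestamps.max(), 1.0)            # 1-second resolution
    signal = np.histogram(timestamps, bins=bins)[0].astype(float)
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0)
    dominant_period = 1.0 / freqs[1:][np.argmax(spectrum[1:])]
    print(f"dominant period ~ {dominant_period:.1f}s")

    # 2) Stochastic period: model inter-arrival times as a (here single-component) mixture.
    gaps = np.diff(timestamps).reshape(-1, 1)
    gmm = GaussianMixture(n_components=1, random_state=0).fit(gaps)
    print(f"period mean {gmm.means_[0, 0]:.1f}s, "
          f"std {np.sqrt(gmm.covariances_[0, 0, 0]):.1f}s")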

  • Added to My Schedule
    keyboard_arrow_down
    Anoop Kulkarni

    Anoop Kulkarni - Role of clinical judgement in AI powered healthcare

    schedule  11:45 AM - 12:30 PM place Neptune people 11 Interested

    Deep learning and machine learning have permeated every known field in the last couple of years, and healthcare has not remained immune. However, there is much more to healthcare than improving AUC and reducing errors; the cost of mistakes can be too high. This tutorial discusses trends in using AI in healthcare, from automating electronic health records and using them to predict patient care, through radiology, ophthalmology, genomics and other omics, to personalized medicine. Once AI-powered healthcare starts yielding results "better than" doctors, clinical deployment becomes the next critical stage. Clinical judgement involves clinical research, experience and other supporting sciences. Starting from a generic ML/DL workflow, the presentation discusses how unintentional small errors in each step can lead to spurious predictions.

    This tutorial will trace the journey of possibilities for deep learning in healthcare and how an integrated, holistic use will assist doctors and hospitals in providing targeted healthcare.

12:40
  • schedule  12:40 - 01:00 PM place Grand Ball Room 1 people 1 Interested

    Videos account for about 75% of Internet traffic today. Enterprises are creating more and more videos and using them for various informational purposes, including marketing, training of customers, partners & employees, and internal communications. However, videos are considered the black holes of the Internet because it is very hard to see what's inside them. The opaque nature of videos equally impacts end users, who spend a lot of time navigating to their point of interest, leading to severe underutilization of videos as a powerful medium of information.

    In this talk, we will describe the visual processing pipeline of the VideoKen platform, which includes:

    1. Graph-based algorithm along with deep scene text detection to identify key visual frames in the video,
    2. FCN-based algorithm for semantic segmentation of screen content in visual frames,
    3. Transfer-learning based visual classifier to categorize screen content into different categories such as slides, code walkthrough, demo, handwritten, etc. and
    4. Algorithm to detect visual coherency and select indices from the video.

    We will discuss challenges and experiences in implementing and iterating on these algorithms, drawing on our experience of processing 100K+ hours of video content.

  • Added to My Schedule
    keyboard_arrow_down
    Ashay Tamhane

    Ashay Tamhane - Modeling Contextual Changes In User Behaviour In Fashion e-commerce

    schedule  12:40 - 01:00 PM place Grand Ball Room 2 people 8 Interested

    Impulse purchases are quite frequent in fashion e-commerce; browse patterns indicate fluid context changes across diverse product types, probably due to the lack of a well-defined need at the consumer's end. Data from a fashion e-commerce portal indicates that the final product a person ends up purchasing is often very different from the product he/she started the session with. We refer to this characteristic as a 'context change'. This feature of fashion e-commerce makes understanding and predicting user behaviour quite challenging. Our work attempts to model this characteristic so as to both detect and preempt context changes. Our approach employs a deep Gated Recurrent Unit (GRU) over clickstream data. We show that this model captures context changes better than other non-sequential baseline models.
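
    A hedged sketch of a GRU-based sequence model over clickstream data, assuming sessions are encoded as padded sequences of product-type IDs; the production model and features described in the talk are more elaborate.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    num_product_types = 200   # assumed size of the product-type vocabulary
    max_session_len = 50      # assumed maximum clicks per session

    model = models.Sequential([
        layers.Input(shape=(max_session_len,)),
        layers.Embedding(num_product_types, 32, mask_zero=True),  # 0 = padding
        layers.GRU(64),                                           # summarizes the session
        layers.Dense(1, activation="sigmoid"),  # P(context change in the next click)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.summary()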

  • Added to My Schedule
    keyboard_arrow_down
    Bargava Subramanian

    Bargava Subramanian - Anomaly Detection for Cyber Security using Federated Learning

    schedule  12:40 - 01:00 PM place Jupiter people 2 Interested

    In a network of connected devices, two aspects are critical to the system's success:

    1. Security – with a number of internet-connected devices, securing the network from cyber threats is very important.
    2. Privacy – the devices capture business-sensitive data that the organisation has to safeguard to maintain its differentiation.

    I've used Federated learning to build anomaly detection models that monitor data quality and cybersecurity – while preserving data privacy.

    Federated learning enables edge devices to collaboratively learn deep learning models while keeping all of the data on the device itself. Instead of moving data to the cloud, the models are trained on the device and only the model updates are shared across the network.

    Using federated learning gave me the following advantages:

    • Ability to build more accurate models faster
    • Low latency during inference
    • Privacy-preserving
    • Improved energy efficiency of the devices

    I built deep learning models using TensorFlow and deployed them using uTensor. uTensor is a lightweight ML inference framework built on Mbed and TensorFlow.

    In this talk, I will discuss in detail how I built federated learning models on edge devices.
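
    The federated averaging idea can be sketched as below: each device trains locally and only the weights (never the data) are aggregated. This uses plain Keras weights for illustration; the speaker's stack (uTensor on Mbed) and models differ.

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, models

    def make_model():
        m = models.Sequential([layers.Input(shape=(8,)),
                               layers.Dense(16, activation="relu"),
                               layers.Dense(1, activation="sigmoid")])
        m.compile(optimizer="adam", loss="binary_crossentropy")
        return m

    def federated_round(global_model, device_datasets, epochs=1):
        """One round: push global weights to devices, train locally, average the updates."""
        local_weights = []
        for x, y in device_datasets:
            local = make_model()
            local.set_weights(global_model.get_weights())
            local.fit(x, y, epochs=epochs, verbose=0)      # data never leaves the device
            local_weights.append(local.get_weights())
        averaged = [np.mean(w, axis=0) for w in zip(*local_weights)]
        global_model.set_weights(averaged)
        return global_model

    # Two hypothetical devices with their own (private) sensor readings.
    rng = np.random.default_rng(0)
    devices = [(rng.normal(size=(100, 8)), rng.integers(0, 2, 100)) for _ in range(2)]
    global_model = federated_round(make_model(), devices)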

  • Added to My Schedule
    keyboard_arrow_down
    Sujoy Roychowdhury

    Sujoy Roychowdhury - Building Multimodal Deep learning recommendation Systems

    schedule  12:40 - 01:00 PM place Neptune people 18 Interested

    Recommendation systems aid consumer decision-making processes such as what to buy, which books to read or which movies to watch. They are especially useful on e-commerce websites where a user has to navigate through several hundred items in order to get to what they are looking for. The data on how users interact with these systems can be used to analyze user behaviour and make recommendations that are in line with users' preferences for certain item attributes over others. Collaborative filtering has, until recently, been able to achieve personalization through user-based and item-based collaborative filtering techniques. Recent advances in the application of deep learning in research as well as industry have led people to apply these techniques in recommendation systems. Many recommendation systems use product features for recommendations. However, textual features available on products are almost invariably incomplete in real-world datasets due to various process-related issues. Additionally, product features, even when available, cannot completely describe a certain item. These limitations reduce the success of such recommendation techniques. Deep learning systems can process multi-modal data like text, images and audio, and are thus our choice for implementing a multi-modal recommendation system.

    In this talk we show a real-world application of a fashion recommendation system. It is based on a multi-modal deep learning system which is able to address the problem of poor annotation in the product data. We evaluate different deep learning architectures for processing multi-modal data and compare their effectiveness. We highlight the trade-offs seen in a real-world implementation and how these trade-offs affect the actual choice of architecture.
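
    As an illustration of feature-level fusion, the following sketch concatenates image features from a pretrained CNN with text features from an embedding branch to score user-item relevance; the shapes and heads are assumptions rather than the presented architecture.

    import tensorflow as tf
    from tensorflow.keras import layers, Model

    vocab_size, text_len = 10000, 30  # assumed text vocabulary and length

    # Image branch: pretrained backbone as a fixed feature extractor.
    img_in = layers.Input(shape=(224, 224, 3))
    backbone = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet")
    backbone.trainable = False
    img_feat = layers.GlobalAveragePooling2D()(backbone(img_in))

    # Text branch: embedding + pooling over the (possibly incomplete) product text.
    txt_in = layers.Input(shape=(text_len,))
    txt_feat = layers.GlobalAveragePooling1D()(layers.Embedding(vocab_size, 64)(txt_in))

    # Fusion: concatenate modalities and predict a relevance score.
    fused = layers.concatenate([img_feat, txt_feat])
    hidden = layers.Dense(128, activation="relu")(fused)
    score = layers.Dense(1, activation="sigmoid")(hidden)

    model = Model([img_in, txt_in], score)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    model.summary()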

01:00

    Lunch - 45 mins

01:45
  • Added to My Schedule
    keyboard_arrow_down
    Dr. Dakshinamurthy V Kolluru

    Dr. Dakshinamurthy V Kolluru - Understanding Text: An exciting journey from Probabilistic Models to Neural Networks

    schedule  01:45 - 02:30 PM place Grand Ball Room 1 people 21 Interested

    We will trace the journey of NLP over the past 50-odd years. We will cover, chronologically, Hidden Markov Models, Elman networks, Conditional Random Fields, LSTMs, Word2Vec, encoder-decoder models, attention models, transfer learning in text and finally transformer architectures. Our emphasis is going to be on how the models became powerful and simple to implement at the same time. To demonstrate this, we take a few case studies solved at INSOFE with a primary goal of retaining accuracy while simplifying engineering. Traditional methods will be compared and contrasted against modern models, showing how the latest models are actually becoming easier for businesses to implement. We also explain how this enhanced comfort with text data is paving the way for state-of-the-art inclusive architectures.

  • schedule  01:45 - 02:30 PM place Grand Ball Room 2 people 11 Interested

    Two branches of AI - Deep Learning, and Reinforcement Learning are now responsible for many real-world applications. Machine Translation, Speech Recognition, Object Detection, Robot Control, and Drug Discovery - are some of the numerous examples.

    Both approaches are data hungry: DL requires many examples of each class, and RL needs to play through many episodes to learn a policy. Contrast this with human intelligence. A small child can typically see an image just once and instantly recognize it in other contexts and environments. We seem to possess an innate model/representation of how the world works, which helps us grasp new concepts and adapt to new situations fast. Humans are excellent one/few-shot learners. We are able to learn complex tasks by observing and imitating other humans (e.g. cooking, dancing or playing soccer), despite having a different point of view, sense modalities, body structure and mental faculties.

    Humans may be very good at picking up novel tasks, but Deep RL agents surpass us in performance. Once a Deep RL agent has learned a good representation [1], it is easy to surpass human performance in complex tasks like Go [2], Dota 2 [3] and Starcraft [4]. We are biologically limited by time, memory and computation (a computer can be made to simulate thousands of plays in a minute).

    RL struggles with tasks that have sparse rewards. Take an example of a soccer playing robot - controlled by applying a torque to each one of its joints. The environment rewards you when it scores a goal. If the policy is initialized randomly (we apply a random torque to each joint, every few milliseconds) the probability of the robot scoring a goal is negligible - it won't even be able to learn how to stand up. In tasks requiring long term planning or low-level skills, getting to that initial reward can prove impossible. These situations have the potential to greatly benefit from a demonstration - in this case showing the robot how to walk and kick - and then letting it figure out how to score a goal.

    We have an abundance of visual data on humans performing various tasks in the public domain, in the form of videos from sources like YouTube. On YouTube alone, 400 hours of video are uploaded every minute, and it is easy to find demonstration videos for any skill imaginable. What if we could harness this by designing agents that could learn how to perform tasks just by watching a video clip?

    Imitation Learning, also known as apprenticeship learning, teaches an agent a sequence of decisions through demonstration, often by a human expert. It has been used in many applications such as teaching drones how to fly [5] and autonomous cars how to drive [6], but it relies on domain-engineered features or extremely precise representations such as mocap [7]. Directly applying imitation learning to learn from videos proves challenging: there is a misalignment of representation between the demonstrations and the agent's environment. For example, how can a robot sensing its world through a 3D point cloud learn from a noisy 2D video clip of a soccer player dribbling?

    Leveraging recent advances in Reinforcement Learning, Self-Supervised Learning and Imitation Learning [8] [9] [10], we present a technical deep dive into an end-to-end framework which:

    1) Has prior knowledge about the world through Self-Supervised Learning - a relatively new area which seeks to build efficient deep learning representations from unlabelled data by training on a surrogate task. The surrogate task can be rotating an image and predicting the rotation angle, or cropping two patches of the image and predicting their relative positions, or a combination of several such objectives (a minimal sketch of one such surrogate task follows this list).

    2) Has the ability to align the representation of how it senses the world, with that of the video - allowing it to learn diverse tasks from video clips.

    3) Has the ability to reproduce a skill, from only a single demonstration - using applied techniques from imitation learning
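
    A minimal sketch of the rotation-prediction surrogate task mentioned in (1): rotate unlabelled images, train a small network to predict the rotation, and reuse its convolutional layers as a label-free representation. The data and network here are illustrative stand-ins.

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, models

    def make_rotation_dataset(images):
        """Create (rotated image, rotation class) pairs for 0, 90, 180, 270 degrees."""
        xs, ys = [], []
        for img in images:
            for k in range(4):
                xs.append(np.rot90(img, k))
                ys.append(k)
        return np.array(xs), np.array(ys)

    # Random arrays stand in for unlabelled video frames (assumed 64x64 RGB).
    unlabelled = np.random.rand(256, 64, 64, 3).astype("float32")
    x, y = make_rotation_dataset(unlabelled)

    model = models.Sequential([
        layers.Input(shape=(64, 64, 3)),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.GlobalAveragePooling2D(),
        layers.Dense(4, activation="softmax"),      # which of the 4 rotations?
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x, y, epochs=1, batch_size=64, verbose=0)
    # The convolutional layers now encode a representation learned without human labels.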

    [1] https://www.cse.iitb.ac.in/~shivaram/papers/ks_adprl_2011.pdf

    [2] https://ai.google/research/pubs/pub44806

    [3] https://openai.com/five/

    [4] https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii/

    [5] http://cs231n.stanford.edu/reports/2017/pdfs/614.pdf

    [6] https://arxiv.org/pdf/1709.07174.pdf

    [7] https://en.wikipedia.org/wiki/Motion_capture

    [8] https://arxiv.org/pdf/1704.06888v3.pdf

    [9] https://bair.berkeley.edu/blog/2018/06/28/daml/

    [10] https://arxiv.org/pdf/1805.11592v2.pdf

  • Added to My Schedule
    keyboard_arrow_down
    Dr. Vijay Srinivas Agneeswaran

    Dr. Vijay Srinivas Agneeswaran / Abhishek Kumar - Industrialized Capsule Networks for Text Analytics

    schedule  01:45 - 02:30 PM place Jupiter people 10 Interested

    Multi-label text classification is an interesting problem where multiple tags or categories may have to be associated with the given text/documents. Multi-label text classification occurs in numerous real-world scenarios, for instance, in news categorization and in bioinformatics (gene classification problem, see [Zafer Barutcuoglu et. al 2006]). Kaggle data set is representative of the problem: https://www.kaggle.com/jhoward/nb-svm-strong-linear-baseline/data.

    Several other interesting problems exist in text analytics, such as abstractive summarization [Chen, Yen-Chun 2018], sentiment analysis, search and information retrieval, entity resolution, document categorization, document clustering, machine translation, etc. Deep learning has been applied to solve many of the above problems; for instance, the paper [Rie Johnson et. al 2015] gives an early approach to applying a convolutional network to make effective use of word order in text categorization. Recurrent Neural Networks (RNNs) have been effective in various text analytics tasks. Significant progress has been achieved in language translation by modelling machine translation using an encoder-decoder approach, with the encoder formed by a neural network [Dzmitry Bahdanau et. al 2014].

    However, as shown in [Dan Rosa de Jesus et. al 2018], certain cases require modelling the hierarchical relationships in text data, which is difficult to achieve with traditional deep learning networks because linguistic knowledge may have to be incorporated into these networks to achieve high accuracy. Moreover, deep learning networks do not consider hierarchical relationships between local features, as the pooling operation of CNNs loses information about these relationships.

    We show one industrial-scale use case of capsule networks which we have implemented for our client in the realm of text analytics: news categorization. We explain how traditional deep learning methods may not be useful when only single-label data is available for training (as in many real-life cases) while the test data set is multi-labelled; this is the sweet spot for capsule networks. We also discuss the key challenges we faced in industrializing capsule networks: starting from a scalable implementation of capsule networks in TensorFlow, we show how capsule networks can be industrialized by providing an implementation on top of KubeFlow, which helps in productionization.

    1. History of impact of machine learning and deep learning on NLP.

    2. Motivation for capsule networks and how they can be used in text analytics.

    3. Implementation of capsule networks in TensorFlow (a small sketch of the capsule squash nonlinearity follows this outline).

    4. Industrialization of capsule nets with KubeFlow.
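
    As a small taste of point 3, the squash nonlinearity, the building block that lets a capsule's output vector length be read as a probability, can be sketched in TensorFlow as follows.

    import tensorflow as tf

    def squash(s, axis=-1, eps=1e-7):
        """v = (||s||^2 / (1 + ||s||^2)) * (s / ||s||), applied per capsule vector."""
        squared_norm = tf.reduce_sum(tf.square(s), axis=axis, keepdims=True)
        scale = squared_norm / (1.0 + squared_norm)
        return scale * s / tf.sqrt(squared_norm + eps)

    # Example: a batch of 2 samples, each with 5 capsules of dimension 8.
    capsules = tf.random.normal((2, 5, 8))
    squashed = squash(capsules)
    print(tf.norm(squashed, axis=-1))  # all norms lie in (0, 1)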

    References:

    [Zafer Barutcuoglu et. al 2006] Zafer Barutcuoglu, Robert E. Schapire, and Olga G. Troyanskaya. 2006. Hierarchical multi-label prediction of gene function. Bioinformatics 22, 7 (April 2006), 830-836. DOI=http://dx.doi.org/10.1093/bioinformatics/btk048

    [Rie Johnson et. al 2015] Rie Johnson, Tong Zhang: Effective Use of Word Order for Text Categorization with Convolutional Neural Networks. HLT-NAACL 2015: 103-112.

    [Dzmitry Bahdanau et. al 2014] Bahdanau, Dzmitry et al. “Neural Machine Translation by Jointly Learning to Align and Translate.” CoRR abs/1409.0473 (2014).

    [Dan Rosa de Jesus et. al 2018] Dan Rosa de Jesus, Julian Cuevas, Wilson Rivera, Silvia Crivelli (2018). “Capsule Networks for Protein Structure Classification and Prediction”,

    available at https://arxiv.org/abs/1808.07475.

    [Yequan Wang et. al 2018] Yequan Wang, Aixin Sun, Jialong Han, Ying Liu, and Xiaoyan Zhu. 2018. Sentiment Analysis by Capsules. In Proceedings of the 2018 World Wide Web Conference (WWW '18). International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, Switzerland, 1165-1174. DOI: https://doi.org/10.1145/3178876.3186015

    Chen, Yen-Chun and Bansal, Mohit (2018), “Fast Abstractive Summarization with Reinforce-Selected Sentence Rewriting”, eprint arXiv:1805.11080.

  • Added to My Schedule
    keyboard_arrow_down
    Dr. C.S.Jyothirmayee

    Dr. C.S.Jyothirmayee / Usha Rengaraju / Vijayalakshmi Mahadevan - Deep learning powered Genomic Research

    schedule  01:45 - 03:15 PM place Neptune people 6 Interested

    Disease happens when there is a slip in the finely orchestrated dance between physiology, environment and genes. Treatment with chemicals (natural, synthetic or a combination) solved some diseases, but others persisted and were propagated along the generations. The molecular basis of disease became the prime centre of studies to understand and analyze root causes. Cancer also showed that the origin of disease, its detection, prognosis, treatment and cure are not uncomplicated processes. Treatment of diseases has to be done on a case-by-case basis (no one size fits all).

    With the advent of next-generation sequencing, high-throughput analysis, enhanced computing power and neural networks, we now have new ways to address this conundrum of complicated genetic elements (the structure and function of various genes in our systems). This requires extraction of genomic material, its sequencing (with automated systems) and analysis to map the strings of As, Ts, Gs and Cs, which yields genomic datasets. These datasets are too large for traditional and applied statistical techniques. Consequently, the important signals are often incredibly small and buried in blaring technical noise, which requires far more sophisticated analysis techniques. Artificial intelligence and deep learning give us the power to draw clinically useful information from the genetic datasets obtained by sequencing.

    The precision of these analyses has become vital and the way forward for disease detection and predisposition, empowering medical authorities to make fair and situation-aware decisions about patient treatment strategies. This kind of genomic profiling, prediction and mode of disease management is useful for tailoring FDA-approved treatment strategies based on these molecular disease drivers and the patient's molecular makeup.

    The present scenario encourages designing, developing and testing medicine based on existing genetic insights and models. Deep learning models are helping to analyze and interpret tiny genetic variations (like SNPs, Single Nucleotide Polymorphisms) which underlie crucial cellular processes such as metabolism and DNA wear and tear. These models are also able to identify disease risk signatures, such as for cancer, from various body fluids. They have immense potential to revolutionize the healthcare ecosystem. Clinical data collection is not streamlined and is done in a haphazard manner; making this data uniformly fetchable and combinable with genetic information would increase the value, interpretability and decisiveness of patient treatment modalities and their outcomes.

    There is a huge inflow of medical data from emerging human wearable technologies; this, along with other health data and the ability to quickly carry out complex analyses on rich genomic databases over cloud technologies, would revitalize the human capability to fight disease. A final, still emerging area of application is direct-to-consumer genomics (the success of 23andMe).

    This road map promises an end-to-end system to face disease in all its forms. Medical research and its applications, such as gene therapies, gene-editing technologies like CRISPR, molecular diagnostics and precision medicine, could be revolutionized by tailoring high-throughput computing methods and applying them to enhanced genomic datasets.

02:45
  • Added to My Schedule
    keyboard_arrow_down
    Nicolas Dupuis

    Nicolas Dupuis - Using Deep-Learning to Accurately Diagnose Your Broadband Connection

    schedule  02:45 - 03:30 PM place Grand Ball Room 1 people 6 Interested

    Within Nokia Software Digital Experience, we build products that increase customer satisfaction and reduce churn through proactive identification of user problems, enabling service providers to resolve problems faster. ML and DL techniques are now contributing a lot to these successes. However, there is usually a long journey from building a first model to delivering a field-proven product. Besides providing highlights on how machine and deep learning are used today to boost the broadband connection, this talk will reveal some of the challenges encountered and best practices involved in reaching the expected quality level.

  • Added to My Schedule
    keyboard_arrow_down
    Kabir Rustogi

    Kabir Rustogi - Generation of Locality Polygons using Open Source Road Network Data and Non-Linear Multi-classification Techniques

    schedule  02:45 - 03:30 PM place Grand Ball Room 2 people 9 Interested

    One of the principal problems in the developing world is the poor localization of its addresses. This inhibits discoverability of local trade, reduces availability of amenities such as creation of bank accounts and delivery of goods and services (e.g., e-commerce) and delays emergency services such as fire brigades and ambulances. In general, people in the developing world identify an address based on neighbourhood/locality names and points of interest (POIs), which are neither standardized nor covered by official records that can help in locating them systematically. In this paper, we describe an approach to building accurate geographical boundaries (polygons) for such localities.

    As training data, we are provided with two pieces of information for millions of address records: (i) a geocode, which is captured by a human for the given address, and (ii) the set of localities present in that address. The latter is determined either by manual tagging or by using an algorithm which takes a raw address string as input and outputs meaningful locality information present in that address. For example, for the address "A-161 Raheja Atlantis Sector 31 Gurgaon 122002", the geocode is given as (28.452800, 77.045903), and the set of localities present in that address is given as (Raheja Atlantis, Sector 31, Gurgaon, Pin-code 122002). The development of this algorithm is part of another project we are working on; details can be found here.

    Many industries, such as the food-delivery, courier-delivery and KYC (know-your-customer) data-collection industries, are likely to have huge amounts of such data. Such crowdsourced data usually contains a large amount of noise, acquired either due to machine/human error in capturing the geocode, or due to error in identifying the correct set of localities from a poorly written address. For example, for the address "Plot 1000, Sector 31 opposite Sector 40 road, Gurgaon 122002", a machine may output the set of localities present in this address as (Sector 31, Sector 40, Gurgaon, Pin-code 122002), even though it is clear that the address does not lie in Sector 40.

    The solution described in this paper consumes the provided data and outputs polygons for each of the localities identified in the address data. We assume that the localities for which we must build polygons are non-overlapping; e.g., this assumption is true for pin-codes. The problem is solved in two phases.

    In the first phase, we separate the noisy points from the points that lie within a locality. This is done by formulating the problem as a non-linear multi-classification problem. The latitudes and longitudes of all non-overlapping localities act as features, and their corresponding locality name acts as a label, in the training data. The classifier is expected to partition the 2D space containing the latitudes and longitudes of the union of all non-overlapping localities into disjoint regions corresponding to each locality. These partitions are defined as non-linear boundaries, which are obtained by optimizing for two objectives: (i) the area enclosed by the boundaries should maximize the number of points of the corresponding locality and minimize the number of points of other localities, (ii) the separation boundary should be smooth. We compare two algorithms, decision trees and neural networks for creating such partitions.

    In the second phase, we extract all the points that satisfy the partition constraints, i.e., lie within the boundary of a locality L, as candidate points, for generating the polygon for locality L. The resulting polygon must contain all candidate points and should have the minimum possible area while maintaining the smoothness of the polygon boundary. This objective can be achieved by algorithms such as concave hull. However, since localities are always bounded by roads, we have further enhanced our locality polygons by leveraging open source data of road networks. To achieve this, we solve a non-linear optimisation problem which decides the set of roads to be selected, so that the enclosed area is minimized, while ensuring that all the candidate points lie within the enclosed area. The output of this optimisation problem is a set of roads, which represents the boundary of a locality L.
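
    A simplified sketch of phase one: treat (latitude, longitude) as features and locality names as labels, fit a non-linear classifier, and treat points that land in the wrong partition as noise. The coordinates below are synthetic stand-ins for the crowdsourced geocodes.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    # Two hypothetical localities as 2D Gaussian blobs of geocodes.
    sector_31 = rng.normal([28.452, 77.045], 0.002, size=(500, 2))
    sector_40 = rng.normal([28.443, 77.055], 0.002, size=(500, 2))
    X = np.vstack([sector_31, sector_40])
    y = np.array(["Sector 31"] * 500 + ["Sector 40"] * 500)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = DecisionTreeClassifier(max_depth=6).fit(X_train, y_train)
    print("partition accuracy:", clf.score(X_test, y_test))

    # A point tagged "Sector 40" but classified as "Sector 31" is treated as noise
    # and excluded from the candidate set used to build the Sector 40 polygon.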

  • Added to My Schedule
    keyboard_arrow_down
    Karthik Bharadwaj T

    Karthik Bharadwaj T - Failure Detection using Driver Behaviour from Telematics

    schedule  02:45 - 03:30 PM place Jupiter people 5 Interested

    Telematics data has the potential to unlock 1.5 trillion in revenue. Unfortunately, this data has not been tapped by many users.

    In this case study, Karthik Thirumalai will discuss how we can use telematics data to identify driver behaviour and do preventive maintenance in automobiles.

03:30

    Coffee/Tea Break - 30 mins

04:00
  • Added to My Schedule
    keyboard_arrow_down
    Dr. Rohit M. Lotlikar

    Dr. Rohit M. Lotlikar - Overcoming data limitations in real-world data science initiatives

    schedule  04:00 - 04:45 PM place Grand Ball Room 1 people 6 Interested

    “Is this the only data you have?” An expression of surprise not uncommonly encountered when evaluating a new opportunity to apply data science. Suitability of available data is a key factor in the abandonment of many otherwise well considered data science initiatives.

    "Could the folks who were responsible for the design of the business process and the supporting IT applications not been more forward thinking and captured the more of the relevant data? To make it even worse, for the data that is being captured, the manual entries are not even consistent between the operators."

    Well, don't throw up your hands just yet. If you are a relatively newly minted data scientist, you are probably used to data being served to you on a platter! (Kaggle, UCI, ImageNet... add your favourite platter to the list)

    Generally, the challenges encountered fall into a few types:

    • At one extreme: they are building a new app and want to incorporate a recommendation engine, but the app is not released yet! There is no data, zero, nada, zilch.
    • At the other extreme: they want us to build an up-sell engine. They have a massive database with a huge number of tables. If I just look for revenue-related fields, I see 10 different customer revenue fields! Which is the right one to use?
    • The client wants me to build a promotion-targeting engine, but they keep changing their offers every month! By the time I have enough data for a promotion, they are ready to kill that promotion and move on to another one.
    • They want to build a decision-support engine, but the available attributes capture only 20-30% of what goes into making the decision. How is this going to be of any help?

    Sounds familiar? You are not alone. Using case studies from his own experience, the speaker will guide the audience on how to make the best of the situation and deliver a value-adding data science solution, or how to decide whether it is more prudent not to pursue it after all.

  • Added to My Schedule
    keyboard_arrow_down
    Yogesh H. Kulkarni

    Yogesh H. Kulkarni - MidcurveNN: Encoder-Decoder Neural Network for Computing Midcurve of a Thin Polygon

    schedule  04:00 - 04:45 PM place Grand Ball Room 2 people 6 Interested

    Various applications need lower-dimensional representations of shapes. A midcurve is a one-dimensional (1D) representation of a two-dimensional (2D) planar shape. It is used in applications such as animation, shape matching, retrieval, finite element analysis, etc. Available methods for computing midcurves vary based on the type of input shape (images, sketches, etc.) and the processing approach, such as Thinning, Medial Axis Transform (MAT), Chordal Axis Transform (CAT), Straight Skeletons, etc., all of which are rule-based.

    This presentation describes a novel method called MidcurveNN which uses an encoder-decoder neural network to compute the midcurve from images of 2D thin polygons in a supervised learning manner. This dimension-reduction transformation from an input 2D thin-polygon image to an output 1D midcurve image is learnt by the neural network, which can then be used to compute the midcurve of an unseen 2D thin polygonal shape.
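
    A hedged sketch of an image-to-image encoder-decoder of the kind MidcurveNN describes, with the image size and layer sizes as assumptions: the input is a binary image of a thin polygon, the output a binary image of its midcurve.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    def midcurve_autoencoder(img_size=64):
        model = models.Sequential([
            layers.Input(shape=(img_size, img_size, 1)),
            # Encoder: compress the 2D polygon image into a low-dimensional code.
            layers.Conv2D(16, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(),
            layers.Conv2D(32, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(),
            # Decoder: reconstruct the 1D midcurve as an image of the same size.
            layers.Conv2DTranspose(32, 3, strides=2, activation="relu", padding="same"),
            layers.Conv2DTranspose(16, 3, strides=2, activation="relu", padding="same"),
            layers.Conv2D(1, 3, activation="sigmoid", padding="same"),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy")
        return model

    midcurve_autoencoder().summary()
    # Training would pair polygon images with their rule-generated midcurve images.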

  • Added to My Schedule
    keyboard_arrow_down
    Dr. Saptarsi Goswami

    Dr. Saptarsi Goswami - Mastering feature selection: basics for developing your own algorithm

    schedule  04:00 - 04:45 PM place Jupiter people 7 Interested

    Feature selection is one of the most important processes for pattern recognition, machine learning and data mining problems. A successful feature selection method improves learning-model performance and interpretability, and reduces the computational cost of the classifier through dimensionality reduction of the data. Feature selection is computationally expensive and becomes intractable even for a few hundred features. This is a relevant problem because text, image and next-generation sequence data are all inherently high dimensional. In this talk, I will discuss a few algorithms we have developed over the last five to six years. First, we will set the context of feature selection, with some open issues, followed by its definition and taxonomy, which will take about 20 minutes. In the next 20 minutes we will discuss a couple of research efforts where we improved feature selection for textual data and proposed a graph-based mechanism to view feature interaction. After the talk, participants will appreciate the need for feature selection, the basic principles of feature selection algorithms, and finally how they can start developing their own models.
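
    As a baseline the talk builds on, a simple filter-style selector can be sketched as follows: rank features by mutual information with the class label and keep the top k (synthetic data, scikit-learn).

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, mutual_info_classif

    # Synthetic high-dimensional data with only a few informative features.
    X, y = make_classification(n_samples=500, n_features=100, n_informative=10,
                               random_state=0)

    selector = SelectKBest(score_func=mutual_info_classif, k=10).fit(X, y)
    selected = np.argsort(selector.scores_)[::-1][:10]
    print("top features by mutual information:", sorted(selected.tolist()))
    # A downstream classifier is then trained only on X[:, selected], reducing
    # dimensionality and computational cost.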

  • schedule  04:00 - 04:45 PM place Neptune people 8 Interested

    Logistics companies, both old and new, have invested heavily in building an efficient frontline workforce to provide swift and convenient services to their users. Timely delivery is often a critical deciding factor for ever-impatient customers to choose service A over service B. Hence, the operations/logistics team is the key enabler here.

    The attrition rate in large frontline teams is high, close to 75 percent annually. Yet most companies have aggressive growth targets, necessitating constant recruitment of high volumes of workers. High-growth companies in this domain, like Zomato and Swiggy, grew by more than 50-60 percent by the end of 2018 and recruited tens of thousands of delivery workers every month.

    At Vahan, we have developed an AI-driven virtual assistant that helps logistics companies scale and automate their hiring process by leveraging the ubiquity of messaging applications like WhatsApp and FB Messenger.

    In this talk, I will cover in detail how we developed a complete data collection and natural language processing pipeline for Indian languages and built a chatbot over WhatsApp, which is currently connecting companies like Dunzo, Zomato, Swiggy & Rapido Express with potential frontline workers and fulfilling the hiring requirements of this industry in a scalable and autonomous fashion.

05:00

    Closing Keynote - 45 mins

05:45

    Closing Talk - 15 mins

Post-Conf Workshop

Sat, Aug 10
09:30

    Registration - 30 mins

10:00
  • Added to My Schedule
    keyboard_arrow_down
    Rahee Walambe

    Rahee Walambe / Vishal Gokhale - Processing Sequential Data using RNNs

    schedule  10:00 AM - 06:00 PM place Jupiter 1 people 31 Interested

    Data that forms the basis of many of our daily activities, such as speech, text and videos, has sequential/temporal dependencies. Traditional deep learning models are inadequate for modelling this connectivity; they had to be made recurrent before technologies such as voice assistants (Alexa, Siri) or video-based speech translation (Google Translate) could reach a practically usable form by reducing the Word Error Rate (WER) significantly. RNNs solve this problem by adding internal memory. The capacity of traditional neural networks is bolstered with this addition, and the results outperform conventional ML techniques wherever the temporal dynamics are important.
    In this full-day immersive workshop, participants will develop an intuition for sequence models through hands-on learning, along with the mathematical premise of RNNs.

  • schedule  10:00 AM - 06:00 PM place Jupiter 2 people 21 Interested

    Modern statistics has become almost synonymous with machine learning, a collection of techniques that utilize today's incredible computing power. This two-part course focuses on the available methods for implementing machine learning algorithms in R, and will examine some of the underlying theory behind the curtain. We start with the foundation of it all, the linear model. We look at how to assess model quality with traditional measures and cross-validation, and visualize models with coefficient plots. Next we turn to penalized regression with the Elastic Net. After that we turn to boosted decision trees using xgboost. Along the way we learn modern techniques for preprocessing data.

  • Added to My Schedule
    keyboard_arrow_down
    Kathrin Melcher

    Kathrin Melcher / Paolo Tamagnini - Deep Dive into Data Science with KNIME Analytics Platform

    schedule  10:00 AM - 06:00 PM place Mars people 8 Interested

    In this course we will cover the major steps in a data science project, from data access, data pre-processing and data visualization, to machine learning, model optimization and deployment, using KNIME Analytics Platform.

  • Added to My Schedule
    keyboard_arrow_down

    Overview of Deep Real Learnathon

    schedule  10:00 - 10:10 AM place Neptune people 2 Interested
10:10
  • schedule  10:10 - 11:40 AM place Neptune people 3 Interested

    Machine learning and deep learning have been rapidly adopted in various spheres of medicine, such as drug discovery, disease diagnosis, genomics, medical imaging and bioinformatics, for translating biomedical data into improved human healthcare. Machine learning/deep learning based healthcare applications assist physicians in making faster, cheaper and more accurate diagnoses.

    We have successfully developed three deep learning based healthcare applications and are currently working on two more healthcare-related projects. In this workshop, we will discuss one healthcare application titled "Deep Learning based Craniofacial Distance Measurement for Facial Reconstructive Surgery", which we developed using TensorFlow. Craniofacial distances play an important role in providing information related to facial structure. They include measurements of the head and face which are to be measured from an image. They are used in facial reconstructive surgeries such as cephalometry, treatment planning of various malocclusions, craniofacial anomalies, facial contouring, facial rejuvenation and different forehead surgeries, in which reliable and accurate data are very important and cannot be compromised.

    Our discussion of this healthcare application will include the precise problem statement, the major steps involved in the solution (deep learning based face detection and facial landmarking, and craniofacial distance measurement), the data set, experimental analysis, and the challenges faced and overcome to achieve this success. Subsequently, we will provide hands-on exposure to implementing this healthcare solution using TensorFlow. Finally, we will briefly discuss possible extensions of our work and the future scope of research in the healthcare sector.
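
    As an illustration of the measurement step only, once a landmarking model has predicted facial landmarks, a craniofacial distance reduces to a Euclidean distance between landmark pairs scaled by a calibration factor. The landmark names, coordinates and scale below are purely illustrative, not the workshop's data.

    import math

    # Hypothetical landmark predictions in pixel coordinates for one face image.
    landmarks = {
        "glabella":     (212, 140),
        "subnasale":    (215, 258),
        "left_zygion":  (128, 205),
        "right_zygion": (300, 207),
    }
    MM_PER_PIXEL = 0.45  # assumed calibration from a reference object in the image

    def distance_mm(p, q):
        """Euclidean distance between two named landmarks, converted to millimetres."""
        return math.dist(landmarks[p], landmarks[q]) * MM_PER_PIXEL

    print(f"upper facial height ~ {distance_mm('glabella', 'subnasale'):.1f} mm")
    print(f"bizygomatic width   ~ {distance_mm('left_zygion', 'right_zygion'):.1f} mm")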

11:40
  • Added to My Schedule
    keyboard_arrow_down
    Joy Mustafi

    Joy Mustafi / Aditya Bhattacharya - Person Identification via Multi-Modal Interface with Combination of Speech and Image Data

    schedule  11:40 AM - 01:10 PM place Neptune people 3 Interested

    Multi-Modalities

    Having multiple modalities in a system gives more affordance to users and can contribute to a more robust system. Having more modalities also allows greater accessibility for users who work more effectively with certain modalities. Multiple modalities can be used as backup when certain forms of communication are not possible. This is especially true in the case of redundant modalities, in which two or more modalities are used to communicate the same information. Certain combinations of modalities can add to the expression of a computer-human or human-computer interaction, because each modality may be more effective at expressing one form or aspect of information than the others. For example, MUST researchers are working on a personalized humanoid built and equipped with various types of input devices and sensors that allow it to receive information from humans: keyboard, pointing device, touchscreen, computer vision, speech recognition, motion, orientation, etc. These interchangeable and standardized methods of communication with the computer afford practical adjustments to the user, provide a richer interaction depending on the context, and make the system more robust.

    There are six types of cooperation between modalities, and they help define how a combination or fusion of modalities work together to convey information more effectively.

    • Equivalence: information is presented in multiple ways and can be interpreted as the same information
    • Specialization: when a specific kind of information is always processed through the same modality
    • Redundancy: multiple modalities process the same information
    • Complementarity: multiple modalities take separate information and merge it
    • Transfer: a modality produces information that another modality consumes
    • Concurrency: multiple modalities take in separate information that is not merged

    Computer - Human Modalities

    Computers utilize a wide range of technologies to communicate and send information to humans:

    • Vision - computer graphics typically through a screen
    • Audition - various audio outputs

    Project Features

    Adaptive: They MUST learn as information changes, and as goals and requirements evolve. They MUST resolve ambiguity and tolerate unpredictability. They MUST be engineered to feed on dynamic data in real time.

    Interactive: They MUST interact easily with users so that those users can define their needs comfortably. They MUST interact with other processors, devices, services, as well as with people.

    Iterative and Stateful: They MUST aid in defining a problem by asking questions or finding additional source input if a problem statement is ambiguous or incomplete. They MUST remember previous interactions in a process and return information that is suitable for the specific application at that point in time.

    Contextual: They MUST understand, identify, and extract contextual elements such as meaning, syntax, time, location, appropriate domain, regulation, user profile, process, task and goal. They may draw on multiple sources of information, including both structured and unstructured digital information, as well as sensory inputs (visual, gestural, auditory, or sensor-provided).

    Project Demos

    Multi-Modal Interaction: https://www.youtube.com/watch?v=jQ8Gq2HWxiA

    Gesture Detection: https://www.youtube.com/watch?v=rDSuCnC8Ei0

    Speech Recognition: https://www.youtube.com/watch?v=AewM3TsjoBk

    Assignment (Hands-on Challenge for Attendees)

    Real-time multi-modal access control system for authorized access to work environment - All the key concepts and individual steps will be demonstrated and explained in this workshop, and the attendees need to customize the generic code or approach for this assignment or hands-on challenge.
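
    A rough sketch (not MUST's implementation) of one way the assignment can combine modalities: score-level fusion of face and voice similarities against enrolled templates, with random vectors standing in for the embeddings a real face/speaker model would produce.

    import numpy as np

    rng = np.random.default_rng(0)
    enrolled = {  # per-person (face_embedding, voice_embedding) templates
        "alice": (rng.normal(size=128), rng.normal(size=192)),
        "bob":   (rng.normal(size=128), rng.normal(size=192)),
    }

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def identify(face_emb, voice_emb, w_face=0.6, w_voice=0.4, threshold=0.5):
        """Fuse per-modality similarity scores and return the best match, if any."""
        scores = {name: w_face * cosine(face_emb, f) + w_voice * cosine(voice_emb, v)
                  for name, (f, v) in enrolled.items()}
        best = max(scores, key=scores.get)
        return (best, scores[best]) if scores[best] >= threshold else ("unknown", scores[best])

    # Probe slightly perturbed from Alice's templates should be accepted as "alice".
    probe_face = enrolled["alice"][0] + 0.1 * rng.normal(size=128)
    probe_voice = enrolled["alice"][1] + 0.1 * rng.normal(size=192)
    print(identify(probe_face, probe_voice))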

01:10
  • Added to My Schedule
    keyboard_arrow_down

    Food

    schedule  01:10 - 01:55 PM place Neptune people 3 Interested
01:55
03:25
  • Added to My Schedule
    keyboard_arrow_down
    Rishu Gupta

    Rishu Gupta / Amit Doshi - Addressing Deep Learning Challenges

    schedule  03:25 - 04:55 PM place Neptune people 4 Interested

    Deep learning is getting lots of attention lately, and for good reason: it is achieving results that were not possible before. Getting started, though, might not always be easy. MATLAB, being an integrated framework, allows you to accelerate building consumer and industrial applications while utilizing the capabilities of open-source frameworks like TensorFlow to train deep learning networks.

    Join us for a hands-on MATLAB workshop, in which you will explore and learn about deep learning workflow in MATLAB while working on key concepts and challenges such as

    • Accelerating/Automating ground truth labeling for data
    • Designing and Validating deep neural networks
    • Training and tuning deep learning algorithms

    Also, we will talk about interoperability with different frameworks and the workflow for deploying your deep learning algorithms to embedded targets.

04:55
  • Added to My Schedule
    keyboard_arrow_down

    Workshop 3

    schedule  04:55 - 05:55 PM place Neptune people 2 Interested