Bengaluru · Aug 9th 01:45 - 02:30 PM · Grand Ball Room 2

Imitation Learning has been the backbone of robots learning from a demonstrator's behavior. Join us to learn how to train a robot to perform tasks like acrobatics.

Two branches of AI - Deep Learning and Reinforcement Learning - are now responsible for many real-world applications. Machine Translation, Speech Recognition, Object Detection, Robot Control, and Drug Discovery are just a few of the numerous examples.

Both approaches are data-hungry - DL requires many examples of each class, and RL needs to play through many episodes to learn a policy. Contrast this with human intelligence. A small child can typically see an image just once, and instantly recognize it in other contexts and environments. We seem to possess an innate model/representation of how the world works, which helps us grasp new concepts and adapt to new situations quickly. Humans are excellent one/few-shot learners. We are able to learn complex tasks by observing and imitating other humans (e.g. cooking, dancing, or playing soccer) - despite having a different point of view, sense modalities, body structure, and mental faculties.

Humans may be very good at picking up novel tasks, but Deep RL agents surpass us in performance. Once a Deep RL agent has learned a good representation [1], it is easy to surpass human performance in complex tasks like Go [2], Dota 2 [3], and StarCraft [4]. We are biologically limited by time, memory, and computation (a computer can be made to simulate thousands of plays in a minute).

RL struggles with tasks that have sparse rewards. Take the example of a soccer-playing robot, controlled by applying a torque to each one of its joints. The environment rewards it only when it scores a goal. If the policy is initialized randomly (we apply a random torque to each joint every few milliseconds), the probability of the robot scoring a goal is negligible - it won't even be able to learn how to stand up. In tasks requiring long-term planning or low-level skills, getting to that initial reward can prove impossible. These situations have the potential to greatly benefit from a demonstration - in this case, showing the robot how to walk and kick - and then letting it figure out how to score a goal.
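The sparse-reward problem can be made concrete with a toy sketch (the chain environment and policies below are illustrative stand-ins, not part of the framework): the only reward sits at the far end of a chain, so a random policy almost never sees it, while a "demonstrated" policy that simply walks forward always does.

```python
import random

# Toy sparse-reward environment: the agent starts at position 0 and only
# receives a reward if it reaches position n within the step budget.
def run_episode(policy, n=20, max_steps=50, rng=random):
    pos = 0
    for _ in range(max_steps):
        pos = max(0, pos + policy(pos, rng))  # action is +1 or -1
        if pos >= n:
            return 1.0                        # sparse reward: goal only
    return 0.0

def random_policy(pos, rng):
    return rng.choice([-1, 1])

def demo_policy(pos, rng):
    return 1  # a "demonstration": always step toward the goal

rng = random.Random(0)
random_return = sum(run_episode(random_policy, rng=rng) for _ in range(1000))
demo_return = sum(run_episode(demo_policy, rng=rng) for _ in range(1000))
print(random_return, demo_return)  # the random policy almost never scores
```

A random policy has to stumble 20 net steps forward before it ever receives a learning signal - exactly the situation a demonstration short-circuits.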

We have an abundance of visual data on humans performing various tasks in the public domain, in the form of videos from sources like YouTube. On YouTube alone, 400 hours of video are uploaded every minute, and it is easy to find demonstration videos for any skill imaginable. What if we could harness this by designing agents that learn how to perform tasks just by watching a video clip?

Imitation Learning, also known as apprenticeship learning, teaches an agent a sequence of decisions through demonstrations, often from a human expert. It has been used in many applications such as teaching drones how to fly [5] and autonomous cars how to drive [6], but it relies on domain-engineered features or extremely precise representations such as mocap [7]. Directly applying imitation learning to learn from videos proves challenging: there is a misalignment of representation between the demonstrations and the agent's environment. For example, how can a robot sensing its world through a 3D point cloud learn from a noisy 2D video clip of a soccer player dribbling?
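In its simplest form, imitation learning reduces to supervised learning on expert state-action pairs (behavioral cloning). A minimal sketch, with hypothetical discretized states and expert actions (e.g. a binned joint angle mapped to a torque direction):

```python
from collections import Counter, defaultdict

# Hypothetical (state, action) pairs recorded from an expert.
expert_demos = [
    (0, +1), (1, +1), (2, +1), (2, +1), (1, +1), (0, +1), (2, -1),
]

def behavioral_cloning(demos):
    """Simplest possible 'model': the majority expert action per state."""
    votes = defaultdict(Counter)
    for state, action in demos:
        votes[state][action] += 1
    return {s: c.most_common(1)[0][0] for s, c in votes.items()}

policy = behavioral_cloning(expert_demos)
print(policy)  # {0: 1, 1: 1, 2: 1}
```

Real systems replace the majority vote with a neural network, but the dependence on precisely aligned state-action pairs - the very thing raw video lacks - is the same.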

Leveraging recent advances in Reinforcement Learning, Self-Supervised Learning, and Imitation Learning [8] [9] [10], we present a technical deep dive into an end-to-end framework which:

1) Has prior knowledge about the world, acquired through Self-Supervised Learning - a relatively new area which seeks to build efficient deep learning representations from unlabelled data by training on a surrogate task. The surrogate task can be rotating an image and predicting the rotation angle, or cropping two patches of an image and predicting their relative positions - or a combination of several such objectives.

2) Has the ability to align the representation of how it senses the world, with that of the video - allowing it to learn diverse tasks from video clips.

3) Has the ability to reproduce a skill from only a single demonstration - using applied techniques from imitation learning.
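The rotation surrogate task mentioned in (1) can be sketched in a few lines: every image yields four rotated copies, and the rotation index serves as a free label, so no human annotation is needed (the 2x2 "image" below is a toy stand-in):

```python
def rotate90(img):
    """Rotate a square image (a list of rows) 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def make_rotation_task(images):
    """Build the surrogate dataset: (rotated image, rotation index) pairs."""
    samples = []
    for img in images:
        rotated = [list(row) for row in img]
        for k in range(4):
            samples.append((rotated, k))  # k in {0,1,2,3} is the free label
            rotated = rotate90(rotated)
    return samples

image = [[1, 2],
         [3, 4]]
dataset = make_rotation_task([image])
print(len(dataset))   # 4 samples per image
print(dataset[1][0])  # [[3, 1], [4, 2]] -- the 90-degree rotation
```

A network trained to predict k from the rotated image is forced to learn object structure and orientation - a representation that transfers to downstream tasks.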

[1] https://www.cse.iitb.ac.in/~shivaram/papers/ks_adprl_2011.pdf

[2] https://ai.google/research/pubs/pub44806

[3] https://openai.com/five/

[4] https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii/

[5] http://cs231n.stanford.edu/reports/2017/pdfs/614.pdf

[6] https://arxiv.org/pdf/1709.07174.pdf

[7] https://en.wikipedia.org/wiki/Motion_capture

[8] https://arxiv.org/pdf/1704.06888v3.pdf

[9] https://bair.berkeley.edu/blog/2018/06/28/daml/

[10] https://arxiv.org/pdf/1805.11592v2.pdf


Outline/Structure of the Case Study

  • (5 Minutes) Demo and explanation of the problem statement
  • The next 35 minutes will cover the prerequisites of the framework, and then the framework itself:
    - (15 Minutes) A practical introduction to Imitation Learning and Reinforcement Learning
    - (10 Minutes) A practical introduction to learning representations
    - (15 Minutes) Technical deep dive into the framework
  • (5 Minutes) Conclusion and Q&A

Learning Outcome

  • Theory and practical know-how on:
    - Self-Supervised Learning
    - Imitation Learning
    - Reinforcement Learning
  • Along with code examples

Target Audience

We expect this session to be highly relevant for researchers & practitioners who work with deep learning and/or reinforcement learning and track the latest research trends. It is also relevant for those working in settings with a large amount of readily available unlabeled data but hard-to-obtain labels. It will also serve as a practical introduction to reinforcement learning.

Prerequisites for Attendees

Basic knowledge of machine learning and familiarity with deep learning.



Submitted 2 years ago

  • Dr. Dakshinamurthy V Kolluru

    Dr. Dakshinamurthy V Kolluru - Understanding Text: An exciting journey from Probabilistic Models to Neural Networks

    45 Mins

    We will trace the journey of NLP over the past 50-odd years. We will cover, chronologically: Hidden Markov Models, Elman networks, Conditional Random Fields, LSTMs, Word2Vec, Encoder-Decoder models, Attention models, transfer learning in text, and finally transformer architectures. Our emphasis is going to be on how the models became powerful and simple to implement simultaneously. To demonstrate this, we take a few case studies solved at INSOFE with the primary goal of retaining accuracy while simplifying engineering. Traditional methods will be compared and contrasted against modern models, showing how the latest models are actually becoming easier for businesses to implement. We also explain how this enhanced comfort with text data is paving the way for state-of-the-art inclusive architectures.

  • Yogesh H. Kulkarni

    Yogesh H. Kulkarni - MidcurveNN: Encoder-Decoder Neural Network for Computing Midcurve of a Thin Polygon

    45 Mins

    Various applications need lower-dimensional representations of shapes. A midcurve is a one-dimensional (1D) representation of a two-dimensional (2D) planar shape. It is used in applications such as animation, shape matching, retrieval, finite element analysis, etc. Methods available to compute midcurves vary based on the type of the input shape (images, sketches, etc.) and processing approaches such as Thinning, Medial Axis Transform (MAT), Chordal Axis Transform (CAT), Straight Skeletons, etc., all of which are rule-based.

    This presentation talks about a novel method called MidcurveNN which uses an Encoder-Decoder neural network for computing the midcurve from images of 2D thin polygons in a supervised learning manner. This dimension-reduction transformation from an input 2D thin polygon image to an output 1D midcurve image is learnt by the neural network, which can then be used to compute the midcurve of an unseen 2D thin polygonal shape.

  • Avishkar Gupta

    Avishkar Gupta / Dipanjan Sarkar - Leveraging AI to Enhance Developer Productivity & Confidence

    45 Mins

    A major approach to applying AI is leveraging it to create a safer world around us, as well as to help people make choices. With the open source revolution having taken the world by storm, and developers relying on various upstream third-party dependencies (too many to choose from! http://www.modulecounts.com/) to develop applications moving petabytes of sensitive data and mission-critical code whose failures can be disastrous, it is required now more than ever to build better developer tooling that helps developers make safer, better choices about their dependencies, and that provides them with more insight into the code they are using. Thanks to deep learning, we are able to tackle these complex problems, and this talk will cover two diverse and interesting problems we have been trying to solve by leveraging deep learning models (recommenders and NLP).

    Though we are data scientists, at heart we are also developers building intelligent systems powered by AI. We, the Red Hat developer group, seek to do the same through our "Dependency Analytics" platform and extension. We call this 'AI-based insights for developers, by developers'!

    In this session we would be going into the details of the deep learning models we have implemented and deployed to solve two major problems:

    1. Dependency Recommendations: Recommend dependencies to a user for their specific application stack by trying to guess their intent by leveraging deep learning based recommender models.
    2. Pro-active Security and Vulnerability Analysis: We would also touch upon how our platform aims to make developer applications safer by way of CVE (Common Vulnerabilities and Exposures) analyses and the experimental deep learning models we have built to proactively identify potential vulnerabilities. We will talk about how we leveraged deep learning models for NLP to tackle this problem.

    This shall be followed by a short architectural overview of the entire platform.

    If we have enough time, we intend to showcase some sample code as a part of a tutorial of how we built these deep learning models and do a walkthrough of the same!

  • Dr. Om Deshmukh

    Dr. Om Deshmukh - Key Principles to Succeed in Data Science

    90 Mins

    Building a successful career in the field of data science needs a lot more than just a thorough understanding of the various machine learning models. One has to also undergo a paradigm shift with regard to how one would typically approach any technical problem. In particular, patterns and insights unearthed from data analysis have to be the guiding North Star for the next best action, rather than the path of action implied by the data scientist's or their superior's intuition alone. One of the things that makes this shift trickier in reality is confirmation bias: a cognitive bias to interpret information in such a way that it furthers our pre-existing notions.

    In this session, we will discuss how the seemingly disjoint components of the digital ecosystem are working in tandem to make data-driven decisioning central to every functional aspect of every business vertical. This centrality accorded to the data makes it imperative that

    • (a) the data integrity is maintained across the lifetime of the data,
    • (b) the insights generated from the data are interpreted in the holistic context of the sources of the data and the data processing techniques, and
    • (c) human experts are systematically given an opportunity to overwrite any purely-data-driven-decisions, especially when such decisions may have far-reaching consequences.

    We will discuss these aspects using three case studies from three different business verticals (financial sector, logistics sector and the third one selected by popular vote). For each of these three case studies, the "traditional" way of solving the problem will be contrasted with the data-driven approach of solving. The participants will be split into three groups and each group will be asked to present the best data-driven approaches to solve one of the case studies. The other two groups can critique the presentation/approach. The winning group will be picked based on the presentation and the proposed approach.

    At the end of the session, the attendees should be able to work through any new case study to

    • (a) translate a business problem into an appropriate data-driven problem,
    • (b) formulate strategies to capture and access relevant data,
    • (c) shortlist relevant data modelling techniques to unearth the hidden patterns, and
    • (d) tie back the value of the findings to the business problem.
  • Dr. Atul Singh

    Dr. Atul Singh - Endow the gift of eloquence to your NLP applications using pre-trained word embeddings

    45 Mins

    Word embeddings are the plinth stones of Natural Language Processing (NLP) applications, used to transform human language into vectors that can be understood and processed by machine learning algorithms. Pre-trained word embeddings enable the transfer of prior knowledge about human language into a new application, thereby enabling the rapid creation of scalable and efficient NLP applications. Since the emergence of word2vec in 2013, the word embeddings field has developed by leaps and bounds, with each successive word embedding outperforming the prior one.

    The goal of this talk is to demonstrate the efficacy of using pre-trained word embedding to create scalable and robust NLP applications, and to explain to the audience the underlying theory of word embeddings that makes it possible. The talk will cover prominent word vector embeddings such as BERT and ELMo from the recent literature.
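As a toy illustration of the core idea (the 3-d vectors below are invented for the example, not taken from any pre-trained model): once words are mapped to dense vectors, semantic similarity reduces to cosine similarity between vectors.

```python
import math

# Hypothetical 3-d word vectors; real pre-trained embeddings are typically
# hundreds of dimensions, learned from large corpora.
embeddings = {
    "king":  [0.90, 0.80, 0.10],
    "queen": [0.85, 0.82, 0.15],
    "apple": [0.10, 0.20, 0.90],
}

def cosine(u, v):
    """Cosine similarity: dot product of u and v over the product of norms."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda w: math.sqrt(sum(a * a for a in w))
    return dot / (norm(u) * norm(v))

print(cosine(embeddings["king"], embeddings["queen"]))  # close to 1.0
print(cosine(embeddings["king"], embeddings["apple"]))  # much lower
```

Contextual embeddings such as BERT and ELMo refine this picture by producing a different vector for each occurrence of a word, but the geometric intuition is the same.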

  • Karthik Bharadwaj T

    Karthik Bharadwaj T - Failure Detection using Driver Behaviour from Telematics

    45 Mins
    Case Study

    Telematics data has the potential to unlock 1.5 trillion in revenue. Unfortunately, this data has not been tapped by many users.

    In this case study, Karthik Thirumalai will discuss how telematics data can be used to identify driver behaviour and perform preventive maintenance in automobiles.


  • Sudipto Pal

    Sudipto Pal - Use cases of Financial Data Science Techniques in Retail

    Walmart Labs

    20 Mins

    Financial domains like Insurance and Banking have uncertainty itself as an inherent product feature, and hence make extensive use of statistical models to develop, value, and price their products. This presentation will showcase some techniques popularly used in financial products, like survival models and cashflow prediction models, and show how they can be used in retail data science through analogies and similarities.

    Survival models were traditionally used for modeling mortality, then got extended to modeling queues, waiting times, and attrition. We showcase: 1) how the waiting-time aspect can be used to model the repeat-purchase behavior of customers, and how to utilize this for product recommendation in particular time intervals; 2) how the same survival or waiting-time problem can be solved using discrete-time binary-response survival models (as opposed to traditional proportional-hazard and AFT models); and 3) quick coverage of other use cases like attrition, CLTV (customer lifetime value), and inventory management.
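The discrete-time formulation in (2) can be sketched by simple counting (the customer data below is invented for illustration): each customer is "at risk" every week until they purchase or leave the observation window, and the hazard at week t is the fraction of at-risk customers who purchase that week.

```python
from collections import defaultdict

# (weeks observed, purchased?) per customer; purchased=False means censored,
# i.e. the observation window ended before a repeat purchase.
customers = [(2, True), (3, True), (2, True), (5, False), (3, True), (1, True)]

at_risk = defaultdict(int)  # customers still without a purchase at week t
events = defaultdict(int)   # purchases observed at week t
for weeks, purchased in customers:
    for t in range(1, weeks + 1):
        at_risk[t] += 1
    if purchased:
        events[weeks] += 1

# Discrete-time hazard: P(purchase at week t | no purchase before week t).
hazard = {t: events[t] / at_risk[t] for t in sorted(at_risk)}
print(hazard)  # e.g. week-2 hazard is 2/5 = 0.4
```

In practice the same binary rows (one per customer-week, with covariates) feed a standard binary classifier, which is what makes the discrete-time formulation so convenient.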

    We show a use case where survival models can be used to predict the timing of events (e.g. attrition/renewal, purchase, purchase order for procurement), and use that to predict the timing of cashflows associated with events (e.g. subscription fee received from renewals, procurement cost etc.), which are typically used for capital allocation.

    We also show how the backdated predicted cashflows can be used as baseline to make causal inference about strategic intervention (e.g. campaign launch for containing attritions) by comparing with actual cashflows post-intervention. This can be used to retrospectively evaluate the impact of strategic interventions.

  • Vishnu Murali

    Vishnu Murali - Deep learning for predictive maintenance : Towards Industry 4.0

    45 Mins

    Why does Industry 4.0 matter?

    Just 13% of organizations have attained the full impact of their digital investments, so empowering them is in demand to realize financial upside and digital expansion. The optimal combination of analytics/deep learning with IoT can save large enterprises and SMEs around $16 billion.

    What’s predictive maintenance (PdM) of Industrial physical assets?

    This is an online monitoring system which requires hardware and software components, including condition-monitoring sensors, gateways and modules to handle data processing and transmission, and a secure cloud server to handle data storage and analytics.

    Why is this important to Industries?

    Cost, safety, availability, and reliability are the main reasons why key industrial players are investing in predictive maintenance. Predictive maintenance allows factories to monitor the condition of in-service equipment by measuring key parameters like vibration, temperature, pressure, and current. Such monitoring requires connected smart sensors featuring a high-speed signal chain, powerful processing, and wired and/or wireless connectivity.
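As a minimal illustration of such condition monitoring (the readings are synthetic and the rule is a crude stand-in for a learned model): flag a sensor when a reading drifts beyond a few standard deviations of its recent history.

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=10, k=3.0):
    """Flag indices whose reading deviates more than k sigmas from the
    mean of the previous `window` readings."""
    alerts = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(readings[i] - mu) > k * sigma:
            alerts.append(i)
    return alerts

# A stable vibration signal with an injected fault at index 25.
vibration = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0] * 3
vibration[25] = 5.0
print(flag_anomalies(vibration))  # [25]
```

Deep-learning PdM replaces the hand-set threshold with models that learn normal operating behaviour across many correlated channels, but the monitor-and-alert loop is the same.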


    Considering the above, as with any machine learning implementation, there are hidden and underlying challenges in implementing PdM for industries.

    To tackle this, our research group has come up with a focused solution to seamlessly integrate machine learning algorithms with an industrial IoT platform. The real challenge is twofold: apart from the technical hurdles, there is a need for agreement between plant engineers and the research community.

    Ambitious foresight

    • To bring awareness among engineers about Industry 4.0
    • To implement PdM in a technically sound way
    • To provide deliverables and achieve ROI

    Keywords: Predictive maintenance, Industry 4.0, Behavioral change

  • Samiran Roy

    Samiran Roy / Shibsankar Das - Semi-Supervised Insight generation from petabyte scale Text data

    45 Mins
    Case Study

    Existing state-of-the-art supervised methods in Machine Learning require large amounts of annotated data to achieve good performance and generalization. However, manually constructing such a training data set with sentiment labels is a labor-intensive and time-consuming task. With the proliferation of data acquisition in domains such as images, text and video, the rate at which we acquire data is greater than the rate at which we can label them. Techniques that reduce the amount of labelled data needed to achieve competitive accuracies are of paramount importance for deploying scalable, data-driven, real-world solutions. Semi-Supervised Learning algorithms generally provide a way of learning about the structure of the data from the unlabelled examples, alleviating the need for labels.

    At Envestnet | Yodlee, we have deployed several advanced state-of-the-art Machine Learning solutions which process millions of data points on a daily basis with very stringent service level commitments. A key aspect of our Natural Language Processing solutions is Semi-supervised learning (SSL): A family of methods that also make use of unlabelled data for training – typically a small amount of labelled data with a large amount of unlabelled data. Pure supervised solutions fail to exploit the rich syntactic structure of the unlabelled data to improve decision boundaries.

    There is an abundance of published work in the field, but few papers have succeeded in showing significantly better results than state-of-the-art supervised learning. Often, methods have simplifying assumptions that fail to transfer to real-world scenarios. There is a lack of practical guidelines for deploying effective SSL solutions. We attempt to bridge that gap by sharing our learnings from successful SSL models deployed in production.

    We will talk about best practices and challenges in deploying SSL solutions in NLP. We shall cover:

    1. Our findings while working on SSL.
    2. Techniques which have worked for us, and which have not
    3. Which SSL method is suitable to solve a given use-case.
    4. How to deal with different distributions for labelled and unlabelled data
    5. How to quantify the effectiveness of each point in our training data
    6. How to build a feedback loop that chooses points for training that result in the greatest accuracy boosts and
    7. The effect of relative sizes of labelled and unlabelled data
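One of the simplest SSL recipes can be sketched as self-training with pseudo-labels (the 1-d data and nearest-centroid "model" below are toys, not a production system): train on the labelled points, pseudo-label only the unlabelled points the model is confident about, and retrain on both.

```python
def fit_centroids(labelled):
    """Toy 'model': the per-class mean of 1-d points."""
    sums, counts = {}, {}
    for x, y in labelled:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(centroids, x):
    return min(centroids, key=lambda y: abs(x - centroids[y]))

labelled = [(0.0, "a"), (1.0, "a"), (9.0, "b"), (10.0, "b")]
unlabelled = [0.4, 0.6, 9.5, 5.2]

centroids = fit_centroids(labelled)
# Pseudo-label only points near a centroid (a crude confidence proxy);
# the ambiguous point 5.2 is left out rather than mislabelled.
confident = [(x, predict(centroids, x)) for x in unlabelled
             if min(abs(x - c) for c in centroids.values()) < 2.0]
centroids = fit_centroids(labelled + confident)
print(confident)  # [(0.4, 'a'), (0.6, 'a'), (9.5, 'b')]
```

The confidence filter is the part that matters in practice: pseudo-labelling every point lets early mistakes compound, which is one reason naive SSL can underperform a purely supervised baseline.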

