“Time is precious; so is Time Series Analysis.”

Time series analysis has been around for centuries, helping us solve everything from astronomical problems to the business problems and advanced scientific research we see around us today. Time stores precious information that most machine learning algorithms do not deal with. Time series analysis, a blend of machine learning and statistics, helps us extract useful insights from it. It can be applied to many fields, such as economic forecasting, budgetary analysis, sales forecasting, census analysis and much more. In this workshop, we will look at how to dive deep into time series data and make use of deep learning to make accurate predictions.

The structure of the workshop is as follows:

  • Basics of time series analysis
  • Understanding Time series data with pandas
  • Preprocessing Time Series data
  • Classical Time series models (AR, MA, ARMA, ARIMA, SARIMA, GARCH, E-GARCH)
  • Forecasting with MLP (Multi-Layer Perceptron)
  • Forecasting with RNN (Recurrent Neural Network)
  • Forecasting with LSTM (Long Short Term Memory Network)
  • Understanding Financial Time Series data and forecasting with RNN and LSTM
  • Boosting techniques in Time series data
  • Developing intuition to choose the right network.
  • Dealing with large scale Time Series data

Libraries Used:

  • Keras (with TensorFlow backend)
  • matplotlib
  • pandas
  • statsmodels
  • prophet
  • pyflux
  • tsfresh
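
To give a flavour of how these libraries fit together, here is a minimal, illustrative sketch of the preprocessing-plus-classical-modeling workflow (the CSV file, column names and ARIMA order are hypothetical placeholders, not the workshop's actual material):

    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA   # requires a recent statsmodels (>= 0.12)

    # "sales.csv" and its columns are placeholders for a real univariate series.
    series = (
        pd.read_csv("sales.csv", parse_dates=["date"], index_col="date")["sales"]
        .asfreq("D")          # enforce a daily frequency
        .interpolate()        # naive gap filling for missing days
    )

    monthly = series.resample("M").sum()          # aggregate to monthly totals

    fit = ARIMA(monthly, order=(1, 1, 1)).fit()   # illustrative (p, d, q); tune in practice
    print(fit.summary())
    print(fit.forecast(steps=6))                  # six-months-ahead forecast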

Outline/Structure of the Workshop

1. Introduction to Time series analysis (15 mins)

2. Manipulating time series data using Pandas (30 mins)

3. Time Series exploratory analysis tools (30 mins)

4. Forecasting time series data with classical methods (AR, MA, ARMA, ARIMA, GARCH, E-GARCH) (120 mins)

5. Introduction to deep learning (20 mins)

6. Time series forecasting using MLP, RNN, LSTM (90 mins)

7. Financial Time Series data - (45 Mins)

8. Boosting Techniques - (25 mins)

9. Dealing with Large scale data - (60 mins)

Learning Outcome

- Using Pandas for time series data

- Using classical models for time series forecasting

- Using deep learning for time series forecasting
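
As a taste of the deep-learning portion, here is a minimal sketch of one-step-ahead forecasting with a Keras LSTM (the synthetic sine series, window length and network size are placeholders, not the workshop's actual examples):

    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import LSTM, Dense

    def make_windows(series, lookback=12):
        # Turn a 1-D array into (samples, lookback, 1) windows and next-step targets.
        X, y = [], []
        for i in range(len(series) - lookback):
            X.append(series[i:i + lookback])
            y.append(series[i + lookback])
        return np.array(X)[..., None], np.array(y)

    values = np.sin(np.linspace(0, 20, 500))    # stand-in for a real, scaled series
    X, y = make_windows(values)

    model = Sequential([LSTM(32, input_shape=(X.shape[1], 1)), Dense(1)])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=10, batch_size=32, verbose=0)

    print(model.predict(X[-1:]))    # one-step-ahead forecast for the last window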

Target Audience

Anyone with an interest in time series analysis and a basic understanding of the math behind it

Prerequisites for Attendees

Basics of Python

Basics of Time series analysis

Basics of Pandas & Deep learning

The following Python packages need to be installed on the attendees' laptops.

- Pandas

- Keras

- statsmodels

- matplotlib

- Prophet

- Pyflux

- tsfresh


Public Feedback

  • Vishal Gokhale  ~  1 day ago
    Hi, I just saw that the outline elements add up to slightly over 5 and a half hrs. The full day slots (8 hrs) would be scheduled pre / post conference, and from a logistical standpoint it works better if we are having a full 8 hr workshop. But we also have 90 mins slots on the conference days. Do you want to make it more focused on specific topics and conduct a 90 mins hands-on workshop which can be scheduled as a part of the conference (and not pre/post conference)?
    • Ramanathan R  ~  1 day ago

      Thanks for the comment Vishal.

      I am sorry - the proposal was not updated with information based on suggestions from previous comments.


      1. In the classical time series models, based on what Dipanjan suggested, we will include multivariate time series handling as well.  I have increased the time there.

      2. In the financial time series, we plan to include the idea of creating a time series based algorithmic strategy and back-testing it. It also includes combining structured data (say stock prices) with unstructured data (news feeds) and performing a prediction on that.  I had to increase the time there as well.

      3. In dealing with large scale data, we would like to have some hands-on tasks on these tools (earlier we planned to have just an intro).  So that takes some more time and I have increased the time there.


      Apart from this, we have some buffer which would be required for setup, questions and audience queries.  Hope that is fine.


      That said, if you insist on converting this to a 90-minute workshop, we will work that out.  It would cover one of the following three topics.

      1.  Deep learning on time series data

      2. Time Series Forecasting using Python (covering classical methods)

      3. Engineering Time series data (covering data processing and dealing with large scale data).  

      Please let us know your thoughts.




  • Naresh Jain  ~  3 days ago

    Hi Ramanathan, thank you for the workshop.

    There are 2 speakers on this workshop proposal. Can you please help us understand how each of you plans to contribute? Are you splitting topics between yourselves, or will one speaker be the main driver while the other assists the attendees?

    • Ramanathan R  ~  3 days ago
      Thanks for the question Naresh. Prudhvi will be covering the data processing using Pandas, MLP and RNN. I will be covering the other parts of the workshop. The non-presenter will also help the attendees as required. Hope that clarifies.
      • Naresh Jain  ~  3 days ago

        Thanks for the prompt response. I was looking for Prudhvi's videos online but could not find any. Prudhvi, can you please share a video from one of your past presentations? The PC would like to look at your presentation style.

  • Dipanjan Sarkar  ~  1 month ago

    The workshop outline definitely looks to have comprehensive coverage of time series analysis methods.

    Would it be possible to also briefly cover how to tackle multivariate time series data and the models which can help there? And maybe also how we could somehow combine unstructured data with time series to aid in forecasting (e.g. news events + stock price time series data)? The latter doesn't have to be a full-fledged demo, but covering some aspects of how these can be tackled could be really helpful to the attendees.

    • Ramanathan R  ~  1 month ago

      Thanks a lot for the feedback, Dipanjan.  Both points are very interesting. 

      We can cover the multivariate time series data.  We can plan some 15-20 mins for the same. 

      Regarding combining unstructured data with structured time series data - we intend to cover it with a basic example of combining a news stream with real-time stock data in the "7. Financial Time Series data" section. This is intended to be basic so as to drive the idea home. Hope it suffices.



      • Dipanjan Sarkar  ~  4 weeks ago

        Yes, this makes perfect sense thanks!

  • Liked Dr. Vikas Agrawal

    Dr. Vikas Agrawal - Non-Stationary Time Series: Finding Relationships Between Changing Processes for Enterprise Prescriptive Systems

    45 Mins

    It is too tedious to keep on asking questions, seeking explanations or setting thresholds for trends or anomalies. Why not find problems before they happen, find explanations for the glitches and suggest shortest paths to fixing them? Businesses are always changing along with their competitive environment and processes. No static model can handle that. Using dynamic models that find time-delayed interactions between multiple time series, we need to make proactive forecasts of anomalous trends of risks and opportunities in operations, sales, revenue and personnel, based on multiple factors influencing each other over time. We need to know how to set what is “normal” and determine when the business processes from six months ago do not apply any more, or only apply to 35% of the cases today, while explaining the causes of risk and sources of opportunity, their relative directions and magnitude, in the context of the decision-making and transactional applications, using state-of-the-art techniques.

    Real world processes and businesses keep changing, with one moving part changing another over time. Can we capture these changing relationships? Can we use multiple variables to find risks on key interesting ones? We will take a fun journey culminating in the most recent developments in the field. What methods work well and which break? What can we use in practice?

    For instance, we can show a CEO that they would miss their revenue target by over 6% for the quarter, and tell them why, i.e. in what ways their business has changed over the last year. Then we provide prioritized, ordered lists of the quickest, cheapest and least risky paths to help turn the tide, with estimates of relative costs and expected probability of success.

  • Liked Favio Vázquez

    Favio Vázquez - Complete Data Science Workflows with Open Source Tools

    90 Mins

    Cleaning, preparing, transforming, exploring and modeling data is what we hear about all the time in data science, and these steps may be the most important ones. But that's not the only thing about data science. In this talk you will learn how the combination of Apache Spark, Optimus, the Python ecosystem and Data Operations can form a whole framework for data science that will allow you and your company to go further, beyond common sense and intuition, to solve complex business problems.

  • Liked Dipanjan Sarkar

    Dipanjan Sarkar - Explainable Artificial Intelligence - Demystifying the Hype

    Data Scientist, Red Hat

    45 Mins

    The field of Artificial Intelligence, powered by Machine Learning and Deep Learning, has gone through some phenomenal changes over the last decade. Starting off as a purely academic and research-oriented domain, it has seen widespread industry adoption across diverse domains including retail, technology, healthcare, science and many more. More often than not, the standard toolbox of machine learning, statistical or deep learning models remains the same. New models do come into existence, like Capsule Networks, but industry adoption of the same usually takes several years. Hence, in the industry, the main focus of data science or machine learning is more ‘applied’ rather than theoretical, and effective application of these models on the right data to solve complex real-world problems is of paramount importance.

    A machine learning or deep learning model by itself consists of an algorithm which tries to learn latent patterns and relationships from data without hard-coding fixed rules. Hence, explaining how a model works to the business always poses its own set of challenges. There are some domains in the industry, especially in the world of finance like insurance or banking, where data scientists often end up having to use more traditional machine learning models (linear or tree-based), because model interpretability is very important for the business to explain each and every decision being taken by the model. However, this often leads to a sacrifice in performance. This is where complex models like ensembles and neural networks typically give us better and more accurate performance (since true relationships are rarely linear in nature). We, however, end up being unable to provide proper interpretations for model decisions.

    To address and talk about these gaps, I will take a conceptual yet hands-on approach where we will explore some of these challenges in-depth about explainable artificial intelligence (XAI) and human interpretable machine learning and even showcase with some examples using state-of-the-art model interpretation frameworks in Python!
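
    As one illustrative approach to model interpretation (a minimal sketch with scikit-learn's permutation importance on a standard toy dataset; not necessarily the frameworks the speaker will showcase):

        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.inspection import permutation_importance
        from sklearn.model_selection import train_test_split

        X, y = load_breast_cancer(return_X_y=True, as_frame=True)  # needs a recent scikit-learn
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

        # Model-agnostic importance: how much does shuffling each feature hurt the score?
        result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
        ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
        for name, score in ranked[:5]:
            print(f"{name}: {score:.3f}")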

  • Liked Dat Tran

    Dat Tran - Image ATM - Image Classification for Everyone

    Head of AI, Axel Springer AI

    45 Mins

    At idealo.de we store and display millions of images. Our gallery contains pictures of all sorts. You’ll find there vacuum cleaners, bike helmets as well as hotel rooms. Working with huge volumes of images brings some challenges: How to organize the galleries? What exactly is in there? Do we actually need all of it?

    To tackle these problems you first need to label all the pictures. In 2018 our Data Science team completed four projects in the area of image classification. In 2019 there are many more to come. Therefore, we decided to automate this process by creating a piece of software we call Image ATM (Automated Tagging Machine). With the help of transfer learning, Image ATM enables the user to train a Deep Learning model without knowledge or experience in the area of Machine Learning. All you need is data and a spare couple of minutes!

    In this talk we will discuss the state-of-the-art technologies available for image classification and present Image ATM in the context of these technologies. We will then give a crash course on our product, where we will guide you through different ways of using it - in shell, on Jupyter Notebook and on the Cloud. We will also talk about our roadmap for Image ATM.
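
    Image ATM abstracts this away from the user, but the underlying transfer-learning idea might look roughly like this minimal Keras sketch (the class count, input size and training datasets are hypothetical placeholders):

        from tensorflow.keras import layers, models
        from tensorflow.keras.applications import MobileNetV2

        NUM_CLASSES = 4   # hypothetical number of image categories

        base = MobileNetV2(weights="imagenet", include_top=False,
                           input_shape=(224, 224, 3), pooling="avg")
        base.trainable = False   # reuse ImageNet features, train only the new head

        model = models.Sequential([base, layers.Dense(NUM_CLASSES, activation="softmax")])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        # model.fit(train_ds, validation_data=val_ds, epochs=5)  # tf.data datasets supplied by the user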

  • Liked Dipanjan Sarkar

    Dipanjan Sarkar / Anuj Gupta - A Hands-on Introduction to Natural Language Processing

    480 Mins

    Data is the new oil and unstructured data, especially text, images and videos contain a wealth of information. However, due to the inherent complexity in processing and analyzing this data, people often refrain from spending extra time and effort in venturing out from structured datasets to analyze these unstructured sources of data, which can be a potential gold mine. Natural Language Processing (NLP) is all about leveraging tools, techniques and algorithms to process and understand natural language-based data, which is usually unstructured like text, speech and so on. In this workshop, we will be looking at tried and tested strategies, techniques and workflows which can be leveraged by practitioners and data scientists to extract useful insights from text data.

    Being specialized in domains like computer vision and natural language processing is no longer a luxury but a necessity which is expected of any data scientist in today’s fast-paced world! With a hands-on and interactive approach, we will understand essential concepts in NLP along with extensive case studies and hands-on examples to master state-of-the-art tools, techniques and frameworks for actually applying NLP to solve real-world problems. We leverage Python 3 and the latest and best state-of-the-art frameworks including NLTK, Gensim, SpaCy, Scikit-Learn, TextBlob, Keras and TensorFlow to showcase our examples.

    In my journey in this field so far, I have struggled with various problems, faced many challenges, and learned various lessons over time. This workshop will contain a major chunk of the knowledge I’ve gained in the world of text analytics and natural language processing, where building a fancy word cloud from a bunch of text documents is not enough anymore. Perhaps the biggest problem with regard to learning text analytics is not a lack of information but too much information, often called information overload. There are so many resources, documentation, papers, books, and journals containing so much content that they often overwhelm someone new to the field. You might have had questions like ‘What is the right technique to solve a problem?’, ‘How does text summarization really work?’ and ‘Which are the best frameworks to solve multi-class text categorization?’ among many other questions! Based on my prior knowledge and learnings from publishing a couple of books in this domain, this workshop should help readers avoid the pressing issues I’ve faced in my journey so far and learn the strategies to master NLP.

    This workshop follows a comprehensive and structured approach. First it tackles the basics of natural language understanding and Python for handling text data in the initial sections. Once you’re familiar with the basics, we cover text processing, parsing and understanding. Then, we address interesting problems in text analytics in each of the remaining sections, including text classification, clustering and similarity analysis, text summarization and topic models, semantic analysis and named entity recognition, sentiment analysis and model interpretation. The last section covers the recent advancements made in NLP thanks to deep learning and transfer learning, and we cover an example of text classification with universal sentence embeddings.

  • 45 Mins

    Artificial Intelligence (AI) has been rapidly adopted in various spheres of medicine such as microbiological analysis, drug discovery, disease diagnosis, genomics, medical imaging and bioinformatics for translating biomedical data into improved human healthcare. Automation in healthcare using machine learning/deep learning assists physicians in making faster, cheaper and more accurate diagnoses.

    We have completed three healthcare projects using deep learning and are currently working on three more healthcare projects. In this session, we shall demonstrate two deep learning based healthcare applications developed using TensorFlow. The discussion of each application will include the following: problem statement, proposed solution, data collected, experimental analysis and challenges faced to achieve this success. Finally, we will briefly discuss the other applications on which we are currently working and the future scope of research in this area.

  • Liked Dr. Atul Singh

    Dr. Atul Singh - Endow the gift of eloquence to your NLP applications using pre-trained word embeddings

    45 Mins

    Word embeddings are the plinth stones of Natural Language Processing (NLP) applications, used to transform human language into vectors that can be understood and processed by machine learning algorithms. Pre-trained word embeddings enable the transfer of prior knowledge about the human language into a new application, thereby enabling rapid creation of scalable and efficient NLP applications. Since the emergence of word2vec in 2013, the word embeddings field has advanced by leaps and bounds, with each successive word embedding outperforming the prior one.

    The goal of this talk is to demonstrate the efficacy of using pre-trained word embeddings to create scalable and robust NLP applications, and to explain to the audience the underlying theory of word embeddings that makes this possible. The talk will cover prominent word embeddings such as BERT and ELMo from the recent literature.
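
    The talk focuses on contextual embeddings such as BERT and ELMo; as a simpler stand-in, here is a minimal sketch of loading pre-trained static GloVe vectors through gensim (the model name is a standard gensim-data identifier, downloaded on first use):

        import gensim.downloader as api

        # Downloads roughly 130 MB of GloVe vectors on first use.
        vectors = api.load("glove-wiki-gigaword-100")

        print(vectors.most_similar("bank", topn=5))
        print(vectors.similarity("king", "queen"))
        # Classic analogy: king - man + woman ~ queen
        print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))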

  • Liked Suvro Shankar Ghosh

    Suvro Shankar Ghosh - Real-Time Advertising Based On Web Browsing In Telecom Domain

    45 Mins
    Case Study

    The following section describes the Telco-domain use case of real-time advertising based on web browsing, in terms of:

    • Potential business benefits to earn.
    • Functional use case architecture depicted.
    • Data sources (attributes required).
    • Analytics to be performed.
    • Output to be provided and target systems to be integrated with.

    This use case is part of the monetization category. The goal of the use case is to provide a kind of data mart that gives either Telecom business parties or external third parties sufficient, relevant and customized information to produce real-time advertising to Telecom end users. The customer targets are all Telecom network end-users.

    The customization information to be delivered to advertisers is based on several dimensions:

    • Customer characteristics: demographic, telco profile.
    • Customer usage: Telco products or any other interests.
    • Customer time/space identification: location, zoning areas, usage time windows.

    Use case requirements are detailed in the description below as “Targeting method”.

    1. Search Engine Targeting:

    The telco will use users' web history to track what users are looking at and to gather information about them. When a user goes onto a website, their web browsing history will show information about the user, such as what he or she searched and where they are from (found via the IP address), and a profile can then be built around them, allowing the Telco to easily target ads to the user more specifically.

    2. Content and Contextual Targeting:

    This is when advertisers can put ads in a specific place, based on the relative content present. This targeting method can be used across different mediums: for example, an online article about purchasing homes would have an advert associated with this context, like an insurance ad. This is achieved through an ad-matching system which analyses the contents on a page or finds keywords and presents a relevant advert, sometimes through pop-ups.

    3. Technical Targeting:

    This form of targeting is associated with the user’s own software or hardware status. The advertisement is altered depending on the user’s available network bandwidth, for example if a user is on their mobile phone that has a limited connection, the ad delivery system will display a version of the ad that is smaller for a faster data transfer rate.

    4. Time Targeting:

    This type of targeting is centered around time and focuses on the idea of fitting in around people’s everyday lifestyles. For example, scheduling specific ads at a timeframe from 5-7pm, when the

    5. Sociodemographic Targeting:

    This form of targeting focuses on the characteristics of consumers, including their age, gender, and nationality. The idea is to target users specifically, using this data collected about them, for example, targeting a male in the age bracket of 18-24. The telco will use this form of targeting by showing advertisements relevant to the user’s individual demographic profile. This can show up in the form of banner ads or commercial videos.

    6. Geographical and Location-Based Targeting:

    This type of advertising involves targeting different users based on their geographic location. IP addresses can signal the location of a user and can usually transfer the location through different cells.

    7. Behavioral Targeting:

    This form of targeted advertising is centered around the activity/actions of users and is more easily achieved on web pages. Information from browsing websites can be collected, which finds patterns in users' search history.

    8. Retargeting:

    This is where advertising uses behavioral targeting to produce ads that follow you after you have looked at or purchased a particular item. Advertisers use this information to ‘follow you’ and try to grab your attention so you do not forget.

    9. Opinions, attitudes, interests, and hobbies:

    Psychographic segmentation also includes opinions on gender and politics, sporting and recreational activities, views on the environment and arts and cultural issues.

  • Liked Shalini Sinha

    Shalini Sinha / Ashok J / Yogesh Padmanaban - Hybrid Classification Model with Topic Modelling and LSTM Text Classifier to identify key drivers behind Incident Volume

    45 Mins
    Case Study

    Incident volume reduction is one of the top priorities for any large-scale service organization, along with timely resolution of incidents within the specified SLA parameters. AI and machine learning solutions can help IT service desks manage the incident influx as well as resolution cost by

    • Identifying major topics from incident description and planning resource allocation and skill-sets accordingly
    • Producing knowledge articles and resolution summary of similar incidents raised earlier
    • Analyzing Root Causes of incidents and introducing processes and automation framework to predict and resolve them proactively

    We will look at different approaches to combine standard document clustering algorithms, such as Latent Dirichlet Allocation (LDA) and K-means clustering on doc2vec, along with text classification to produce easily interpretable document clusters with semantically coherent text representations, which helped the IT operations of a large FMCG client identify key drivers/topics contributing towards incident volume and take necessary action on them.
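
    As a minimal illustration of the topic-modelling half of such a hybrid (a sketch with gensim on toy incident text, not the authors' actual pipeline):

        from gensim import corpora
        from gensim.models import LdaModel

        # Toy incident descriptions; real input would be cleaned, tokenised ticket text.
        docs = [
            ["password", "reset", "account", "locked"],
            ["vpn", "connection", "drop", "remote"],
            ["printer", "driver", "install", "error"],
            ["account", "locked", "login", "failure"],
        ]

        dictionary = corpora.Dictionary(docs)
        corpus = [dictionary.doc2bow(doc) for doc in docs]

        lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2,
                       passes=10, random_state=0)
        for topic_id, words in lda.print_topics(num_words=4):
            print(topic_id, words)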

  • Liked Antrixsh Gupta

    Antrixsh Gupta - Creating Custom Interactive Data Visualization Dashboards with Bokeh

    90 Mins

    This will be a hands-on workshop on how to build a custom interactive dashboard application on your local machine or on any cloud service provider. You will also learn how to deploy this application with both security and scalability in mind.

    Powerful data visualization software solutions are extremely useful when building interactive data visualization dashboards. However, these types of solutions might not provide sufficient customization options. For those scenarios, you can use open source libraries like D3.js, Chart.js, or Bokeh to create custom dashboards, as these libraries offer a lot of flexibility for building dashboards with tailored features and visualizations.
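
    A minimal Bokeh sketch of the kind of plot such a dashboard builds on (toy data; a real dashboard would add widgets and server callbacks):

        from bokeh.layouts import column
        from bokeh.models import ColumnDataSource
        from bokeh.plotting import figure, show

        source = ColumnDataSource(data=dict(x=[1, 2, 3, 4, 5], y=[6, 7, 2, 4, 5]))

        p = figure(title="Hypothetical metric over time",
                   x_axis_label="day", y_axis_label="value",
                   tools="pan,wheel_zoom,box_zoom,reset,hover")
        p.line("x", "y", source=source, line_width=2)
        p.circle("x", "y", source=source, size=8)

        show(column(p))   # for a deployable app, serve callbacks with `bokeh serve` instead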

  • Liked Pankaj Kumar

    Pankaj Kumar / Abinash Panda / Usha Rengaraju - Quantitative Finance :Global macro trading strategy using Probabilistic Graphical Models

    90 Mins

    { This is a hands-on workshop on the pgmpy package. The creator of the pgmpy package, Abinash Panda, will do the code demonstration }

    Crude oil plays an important role in macroeconomic stability and it heavily influences the performance of the global financial markets. Unexpected fluctuations in the real price of crude oil are detrimental to the welfare of both oil-importing and oil-exporting economies. Global macro hedge funds view the forecast of the price of oil as one of the key variables in generating macroeconomic projections, and it also plays an important role for policy makers in predicting recessions.

    Probabilistic Graphical Models can help in improving the accuracy of existing quantitative models for crude oil price prediction, as they take into account many different macroeconomic and geopolitical variables.

    Hidden Markov Models are used to detect underlying regimes of the time-series data by discretising the continuous time-series data. In this workshop we use Baum-Welch algorithm for learning the HMMs, and Viterbi Algorithm to find the sequence of hidden states (i.e. the regimes) given the observed states (i.e. monthly differences) of the time-series.
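
    As an illustrative sketch of such regime detection (using hmmlearn as a stand-in for the pgmpy demonstration in the workshop, on synthetic data):

        import numpy as np
        from hmmlearn.hmm import GaussianHMM   # illustrative stand-in; the workshop demo uses pgmpy

        # Hypothetical monthly differences of the crude oil price, two synthetic regimes.
        rng = np.random.default_rng(0)
        returns = np.concatenate([rng.normal(0.02, 0.01, 120),    # calm regime
                                  rng.normal(-0.01, 0.05, 60)])   # volatile regime
        X = returns.reshape(-1, 1)

        model = GaussianHMM(n_components=2, covariance_type="full", n_iter=200, random_state=0)
        model.fit(X)                 # parameters estimated with Baum-Welch (EM)
        regimes = model.predict(X)   # most likely state sequence via the Viterbi algorithm
        print(regimes[:10], regimes[-10:])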

    Belief Networks are used to analyse the probability of a regime in crude oil given the evidence as a set of different regimes in the macroeconomic factors. A Greedy Hill Climbing algorithm is used to learn the Belief Network, and the parameters are then learned using Bayesian Estimation with a K2 prior. Inference is then performed on the Belief Networks to obtain a forecast of the crude oil markets, and the forecast is tested on real data.

  • Liked Saikat Sarkar

    Saikat Sarkar / Dhanya Parameshwaran / Dr Sweta Choudhary / Raunak Bhandari / Srikanth Ramaswamy / Usha Rengaraju - AI meets Neuroscience

    480 Mins

    This is a mixer workshop where lots of clinicians, medical experts, neuroimaging experts, neuroscientists, data scientists and statisticians will come under one roof to bring together this revolutionary workshop.

    The theme will be updated soon.

    Our celebrity and distinguished presenter Srikanth Ramaswamy, who is an advisor at Mysuru Consulting Group and also works on the Blue Brain Project at EPFL, will be delivering an expert talk in the workshop.


    { This workshop will be a combination of panel discussions, expert talks and a neuroimaging data science workshop (applying machine learning and deep learning algorithms to neuroimaging data sets) }

    { We are currently onboarding several experts from the neuroscience domain: neurosurgeons, neuroscientists and computational neuroscientists. Details of the speakers will be released soon }

    Abstract for the Neuroimaging Data Science Part of the workshop:

    The study of the human brain with neuroimaging technologies is at the cusp of an exciting era of Big Data. Many data collection projects, such as the NIH-funded Human Connectome Project, have made large, high-quality datasets of human neuroimaging data freely available to researchers. These large data sets promise to provide important new insights about human brain structure and function, and to provide us the clues needed to address a variety of neurological and psychiatric disorders. However, neuroscience researchers still face substantial challenges in capitalizing on these data, because these Big Data require a different set of technical and theoretical tools than those that are required for analyzing traditional experimental data. These skills and ideas, collectively referred to as Data Science, include knowledge in computer science and software engineering, databases, machine learning and statistics, and data visualization.

    The workshop covers data analysis, statistics, data visualization and applying cutting-edge analytics to complex and multimodal neuroimaging datasets. Topics which will be covered in this workshop are statistics, associative techniques, graph theoretical analysis, causal models, nonparametric inference, and meta-analytical synthesis.

  • Liked Raunak Bhandari

    Raunak Bhandari / Ankit Desai / Usha Rengaraju - Knowledge Graph from Natural Language: Incorporating order from textual chaos

    90 Mins


    What If I told you that instead of the age-old saying that "a picture is worth a thousand words", it could be that "a word is worth a thousand pictures"?

    Language evolved as an abstraction of distilled information observed and collected from the environment for sophisticated and efficient interpersonal communication and is responsible for humanity's ability to collaborate by storing and sharing experiences. Words represent evocative abstractions over information encoded in our memory and are a composition of many primitive information types.

    That is why language processing is a much more challenging domain, and why it witnessed a delayed 'ImageNet' moment.

    One of the cornerstone applications of natural language processing is to leverage the language's inherent structural properties to build a knowledge graph of the world.

    Knowledge Graphs

    A knowledge graph is a form of rich knowledge base which represents information as an interconnected web of entities and their interactions with each other. This naturally manifests as a graph data structure, where nodes represent entities and the relationships between them are the edges.

    Automatically constructing and leveraging it in an intelligent system is an AI-hard problem, and an amalgamation of a wide variety of fields like natural language processing, information extraction and retrieval, graph algorithms, deep learning, etc.

    It represents a paradigm shift for artificial intelligence systems by going beyond deep learning driven pattern recognition and towards more sophisticated forms of intelligence rooted in reasoning to solve much more complicated tasks.

    To elucidate the differences between reasoning and pattern recognition: consider the problem of computer vision: the vision stack processes an image to detect shapes and patterns in order to identify objects - this is pattern recognition, whereas reasoning is much more complex - to associate detected objects with each other in order to meaningfully describe a scene. For this to be accomplished, a system needs to have a rich understanding of the entities within the scene and their relationships with each other.

    To understand a scene where a person is drinking a can of cola, a system needs to understand concepts like people, that they drink certain liquids via their mouths, liquids can be placed into metallic containers which can be held within a palm to be consumed, and the generational phenomenon that is cola, among others. A sophisticated vision system can then use this rich understanding to fetch details about cola in-order to alert the user of their calorie intake, or to update preferences for a customer. A Knowledge Graph's 'awareness' of the world phenomenons can thus be used to augment a vision system to facilitate such higher order semantic reasoning.

    In production systems though, reasoning may be cast into a pattern recognition problem by limiting the scope of the system for feasibility, but this may be insufficient as the complexity of the system scales or we try to solve general intelligence.

    Challenges in building a Knowledge Graph

    There are two primary challenges towards integrating knowledge graphs in systems: acquisition of knowledge and construction of the graph and effectively leveraging it with robust algorithms to solve reasoning tasks. Creation of the knowledge graph can vary widely depending on the breadth and complexity of the domain - from just manual curation to automatically constructing it by leveraging unstructured/semi-structured sources of knowledge, like books and Wikipedia.

    Many natural language processing tasks are precursors towards building knowledge graphs from unstructured text, like syntactic parsing, information extraction, entity linking, named entity recognition, relationship extraction, semantic parsing, semantic role labeling, entity disambiguation, etc. Open information extraction is an active area of research on extracting semantic triplets of subject ('John'), predicate ('eats'), object ('burger') from plain text, which are used to build the knowledge graph automatically.
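
    A naive, illustrative sketch of extracting such (subject, predicate, object) triples from dependency parses with spaCy (far simpler than production open information extraction systems):

        import spacy

        nlp = spacy.load("en_core_web_sm")   # small English model, installed separately
        doc = nlp("John eats a burger in the park.")

        triples = []
        for token in doc:
            if token.pos_ == "VERB":
                subjects = [w for w in token.lefts if w.dep_ in ("nsubj", "nsubjpass")]
                objects = [w for w in token.rights if w.dep_ in ("dobj", "obj", "attr")]
                for s in subjects:
                    for o in objects:
                        triples.append((s.text, token.lemma_, o.text))

        print(triples)   # e.g. [('John', 'eat', 'burger')]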

    A very interesting approach to this problem is the extraction of frame semantics. Frame semantics relates linguistic semantics to encyclopedic knowledge and the basic idea is that the meaning of a word is linked to all essential knowledge that relates to it, for eg. to understand the word "sell", it's necessary to also know about commercial transactions, which involve a seller, buyer, goods, payment, and the relations between these, which can be represented in a knowledge graph.

    This workshop will focus on building such a knowledge graph from unstructured text.

    Learn good research practices like organizing code and modularizing output for productive data wrangling to improve algorithm performance.

    Knowledge Graph at Embibe

    We will showcase how Embibe's proprietary Knowledge Graph manifests and how it's leveraged across a multitude of projects in our Data Science Lab.

  • Liked Dr. C.S.Jyothirmayee

    Dr. C.S.Jyothirmayee / Usha Rengaraju / Vijayalakshmi Mahadevan - Deep learning powered Genomic Research

    90 Mins

    The event of disease happens when there is a slip in the finely orchestrated dance between physiology, environment and genes. Treatment with chemicals (natural, synthetic or a combination) solved some diseases, but others persisted and got propagated along the generations. The molecular basis of disease became the prime center of studies to understand and analyze the root cause. Cancer also showed that the origin of disease, detection, prognosis and treatment along with cure was not such an uncomplicated process. Treatment of diseases had to be done on a case-by-case basis (no one size fits all).

    With the advent of next-generation sequencing, high-throughput analysis, enhanced computing power and new aspirations, neural networks can address this conundrum of complicated genetic elements (the structure and function of various genes in our systems). This requires genomic material extraction, sequencing (automated systems) and analysis to map the strings of As, Ts, Gs and Cs, which yields genomic datasets. These datasets are too large for traditional and applied statistical techniques. Consequently, the important signals are often incredibly small and accompanied by blaring technical noise, which requires far more sophisticated analysis techniques. Artificial intelligence and deep learning give us the power to draw clinically useful information from the genetic datasets obtained by sequencing.

    The precision of these analyses has become vital and the way forward for disease detection and predisposition, and it empowers medical authorities to make fair and situationally aware decisions about patient treatment strategies. This kind of genomic profiling, prediction and mode of disease management is useful for tailoring FDA-approved treatment strategies based on these molecular disease drivers and the patient’s molecular makeup.

    Now, the present scenario encourages the designing, developing and testing of medicine based on existing genetic insights and models. Deep learning models are helping to analyze and interpret tiny genetic variations (like SNPs, Single Nucleotide Polymorphisms) which result in the unraveling of crucial cellular processes like metabolism and DNA wear and tear. These models are also responsible for identifying disease signatures, like cancer risk signatures, from various body fluids. They have immense potential to revolutionize the healthcare ecosystem. Clinical data collection is not streamlined and is done in a haphazard manner; making the data amenable to uniform retrieval and combinable with genetic information would empower the value, interpretation and decisiveness of patient treatment modalities and their outcomes.

    There is a huge inflow of medical data from emerging human wearable technologies, along with other health data, and integrating this with the ability to quickly carry out complex analyses on rich genomic databases over cloud technologies would revitalize the disease-fighting capability of humans. Last, but still an upcoming, area of application is direct-to-consumer genomics (the success of 23andMe).

    This road map promises an end-to-end system to face disease in all its forms and nature. Medical research and its applications, like gene therapies, gene editing technologies like CRISPR, molecular diagnostics and precision medicine, could be revolutionized by tailoring a high-throughput computing method and its application to enhanced genomic datasets.

  • Liked Amit  Baldwa


    45 Mins

    Machine learning provides systems the ability to automatically learn and improve from experience without being explicitly programmed.

    Technical analysis shows, in graphic form, investor sentiment: both greed and fear. Technical analysis attempts to use past stock price and volume information to predict future price movements. Technical analysis of various indicators has been a time-tested strategy for seasoned traders and hedge funds, who have used these techniques to effectively turn out profits in the securities industry.

    Some researchers claim that stock prices conform to the theory of the random walk, which is that the future path of the price of a stock is no more predictable than random numbers. However, stock prices do not follow random walks.

    We will evaluate whether stock returns can be predicted based on historical information.

    Coupled with machine learning, we further try to decipher the correlation between the various indicators and identify the set of indicators which appropriately predict the value.
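
    As a minimal, illustrative sketch of this idea (synthetic prices and toy indicators; not the speaker's actual methodology or indicator set):

        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LogisticRegression

        # Hypothetical random-walk price series; real input would come from a market data feed.
        prices = pd.Series(100 + np.cumsum(np.random.default_rng(0).normal(0, 1, 500)))

        df = pd.DataFrame({"close": prices})
        df["return"] = df["close"].pct_change()
        df["sma_10"] = df["close"].rolling(10).mean()            # simple moving average
        df["momentum_5"] = df["close"].diff(5)                    # 5-day momentum
        df["target"] = (df["return"].shift(-1) > 0).astype(int)   # next-day direction
        df = df.dropna()

        features = ["return", "sma_10", "momentum_5"]
        split = int(len(df) * 0.8)
        model = LogisticRegression().fit(df[features][:split], df["target"][:split])
        print("out-of-sample accuracy:", model.score(df[features][split:], df["target"][split:]))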