Natural Language Processing Bootcamp - Zero to Hero
Data is the new oil, and unstructured data, especially text, images and videos, contains a wealth of information. However, due to the inherent complexity of processing and analyzing this data, people often refrain from spending the extra time and effort to venture beyond structured datasets and analyze these unstructured sources, which can be a potential gold mine. Natural Language Processing (NLP) is all about leveraging tools, techniques and algorithms to process and understand natural language-based unstructured data - text, speech and so on.
Specialization in domains like computer vision and natural language processing is no longer a luxury but a necessity expected of any data scientist in today's fast-paced world! With a hands-on and interactive approach, we will understand essential concepts in NLP along with extensive case studies and hands-on examples to master state-of-the-art tools, techniques and frameworks for actually applying NLP to solve real-world problems. We leverage Python 3 and the latest and best state-of-the-art frameworks including NLTK, Gensim, SpaCy, Scikit-Learn, TextBlob, Keras and TensorFlow to showcase our examples. You will be able to learn a fair bit of machine learning as well as deep learning in the context of NLP during this bootcamp.
In our journey in this field, we have struggled with various problems, faced many challenges, and learned various lessons over time. This workshop is our way of giving back a major chunk of the knowledge we’ve gained in the world of text analytics and natural language processing, where building a fancy word cloud from a bunch of text documents is not enough anymore. You might have had questions like ‘What is the right technique to solve a problem?’, ‘How does text summarization really work?’ and ‘Which are the best frameworks for multi-class text categorization?’ among many others! Based on our prior knowledge and the lessons learned from publishing a couple of books in this domain, this workshop should help attendees avoid some of the pressing issues in NLP and learn effective strategies to master it.
The intent of this workshop is to make you a hero in NLP so that you can start applying NLP to solve real-world problems. We start from zero and follow a comprehensive and structured approach to teach you all the essentials of NLP. We will be covering the following aspects during the course of this workshop with hands-on examples and projects!
- Basics of Natural Language and Python for NLP tasks
- Text Processing and Wrangling
- Text Understanding - POS, NER, Parsing
- Text Representation - BOW, Embeddings, Contextual Embeddings
- Text Similarity and Content Recommenders
- Text Clustering
- Topic Modeling
- Text Summarization
- Sentiment Analysis - Unsupervised & Supervised
- Text Classification with Machine Learning and Deep Learning
- Multi-class & Multi-Label Text Classification
- Deep Transfer Learning and its promise
- Applying Deep Transfer Learning - Universal Sentence Encoders, ELMo and BERT for NLP tasks
- Generative Deep Learning for NLP
- Next Steps
With over 10 hands-on projects, the bootcamp will be packed with plenty of hands-on examples for you to go through, try out and practice, and we will try to keep theory to a minimum considering the limited time we have and the amount of ground we want to cover. We hope that at the end of this workshop you can take away some useful methodologies to apply when solving NLP problems in the future. We will be using Python to showcase all our examples.
Outline/Structure of the Workshop
The following is the rough structure of the workshop, subject to some minor changes.
- Introduction to Natural Language Processing
- Python for NLP
- Text pre-processing and Wrangling
- Removing HTML tags / noise
- Removing accented characters
- Removing special characters / symbols
- Handling contractions
- Stemming
- Lemmatization
- Stop word removal
- Hands-on Project: Building a text pre-processor with multi-threading (a minimal single-threaded sketch follows this outline)
- Text Understanding
- POS (Parts of Speech) Tagging
- Text Parsing (Shallow, Dependency, Constituency)
- NER (Named Entity Recognition) Tagging
- Hands-on Project: Build your own NER Tagger - Statistical Models & Deep Learning Models (see the SpaCy tagging sketch after this outline)
- Text Representation – Feature Engineering
- Traditional Statistical Models – BOW, TF-IDF
- Newer Deep Learning Models for word embeddings – Word2Vec, GloVe, FastText
- Contextual word embeddings - ELMo, BERT
- Hands-on Project: Interactive exploration of Word Embeddings (see the TF-IDF / Word2Vec sketch after this outline)
- Hands-on Project: Similarity and Movie Recommendations with different text representations
- Hands-on Project: Sentiment Analysis using unsupervised learning & supervised learning
- Hands-on Project: Text Clustering of Movies
- Hands-on Project: Text Summarization Methods - Statistical & Deep Learning
- Hands-on Project: Topic Modeling - explore current research trends in AI
- Hands-on Project: Text Classification Models
- Traditional Machine Learning Models
- Deep Neural Nets
- Convolutional Neural Networks (CNNs)
- Long Short-Term Memory Networks (LSTMs)
- Bi-directional LSTMs / GRUs
- Deep Transfer Learning Models
- Promise of Deep Transfer Learning for NLP
- Hands-on Project: Deep Transfer Learning with ELMo, BERT, Universal Sentence Embeddings
- Hands-on Project: Generative Deep Learning for NLP
- Conclusion and Next Steps
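To give you a flavour of the pre-processing segment, here is a minimal single-threaded sketch along the lines of what we build in the workshop. It is a sketch, not the workshop code itself; in particular, the contractions helper package is an assumption here, and the actual project may handle contractions differently.

```python
import re
import unicodedata

import contractions                    # pip install contractions (assumed helper)
import nltk
from bs4 import BeautifulSoup          # pip install beautifulsoup4

nltk.download('stopwords', quiet=True)
nltk.download('wordnet', quiet=True)

stop_words = set(nltk.corpus.stopwords.words('english'))
lemmatizer = nltk.stem.WordNetLemmatizer()

def preprocess(doc):
    doc = BeautifulSoup(doc, 'html.parser').get_text()   # remove HTML tags / noise
    doc = (unicodedata.normalize('NFKD', doc)            # remove accented characters
           .encode('ascii', 'ignore').decode('utf-8'))
    doc = contractions.fix(doc)                          # "weren't" -> "were not"
    doc = re.sub(r'[^a-zA-Z\s]', ' ', doc).lower()       # remove special characters / symbols
    return ' '.join(lemmatizer.lemmatize(tok)            # lemmatization
                    for tok in doc.split()
                    if tok not in stop_words)            # stop word removal

print(preprocess("<p>The quick brown foxes weren't jumping over 2 lazy dogs!</p>"))
# -> 'quick brown fox jumping lazy dog'
```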
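The POS, parsing and NER segments can similarly be previewed in a few lines with SpaCy, one of the frameworks we use. A minimal sketch, assuming the small English model has been downloaded:

```python
import spacy

# assumes: python -m spacy download en_core_web_sm
nlp = spacy.load('en_core_web_sm')
doc = nlp("Google was founded by Larry Page and Sergey Brin in California.")

# part-of-speech tags and dependency relations per token
for token in doc:
    print(f"{token.text:<10} POS={token.pos_:<6} dep={token.dep_:<10} head={token.head.text}")

# named entities recognized by the pre-trained statistical model
for ent in doc.ents:
    print(ent.text, '->', ent.label_)   # e.g. Google -> ORG, California -> GPE
```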
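And for text representation and similarity, a toy sketch contrasting sparse TF-IDF features (Scikit-Learn) with dense Word2Vec embeddings (Gensim; the 4.x API is assumed). Real projects train on far larger corpora:

```python
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    'the sky is blue and beautiful',
    'love this blue and beautiful sky',
    'the quick brown fox jumps over the lazy dog',
]

# sparse bag-of-words representation weighted by TF-IDF
doc_vectors = TfidfVectorizer().fit_transform(corpus)
print(cosine_similarity(doc_vectors))   # pairwise document similarity matrix

# dense word embeddings trained on the (tiny) corpus
tokenized = [doc.split() for doc in corpus]
w2v = Word2Vec(sentences=tokenized, vector_size=50, window=3, min_count=1, epochs=100)
print(w2v.wv.most_similar('sky', topn=3))
```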
Learning Outcome
- Learn and understand popular NLP workflows with interactive examples
- Covers concepts and interactive projects on cleaning and handling noisy unstructured text data including duplicate checks, spelling corrections and text wrangling
- Build Parsers and NER taggers and parse text data to understand it better
- Understand, build and explore text semantics and representations with traditional statistical models and newer word embedding and contextual embedding models based on deep learning
- Projects on popular NLP tasks including text classification, sentiment analysis, text clustering, summarization, topic models and recommendations
- Implementations of recent state-of-the-art research on deep transfer learning and generative deep learning for NLP
- Learn and implement the latest state-of-the-art models in NLP including ELMo, BERT and so on
- Learn best practices and robust methodologies for NLP, with the entire codebase shared with workshop participants to keep after the workshop
- Over 10 Hands-on Projects showcasing the best in NLP
Target Audience
Data Scientists, Engineers, Developers, AI Enthusiasts, Linguistic Experts
Prerequisites for Attendees
Basic knowledge of Python and Machine Learning / Deep Learning helps.
All the examples will be covered in Python.
Having a system with a GPU or access to a GPU helps, since then you can run all the examples during the workshop itself. We will walk through everything anyway during the workshop.
Links
This is based on my popular book on NLP: https://github.com/dipanjanS/text-analytics-with-python
We will also be adding new content around topic models and deep transfer learning for NLP.
I write interesting content and you can check it out at: https://medium.com/@dipanzan.sarkar
Also presented at ODSC India 2018: https://www.youtube.com/watch?v=2yRl-DEu0g0&t=2651s
People who liked this proposal also liked:
- Viral B. Shah - Models as Code: Differentiable Programming with Julia
45 Mins
Keynote
Intermediate
Since we originally proposed the need for a first-class language, compiler and ecosystem for machine learning (ML) - a view that is increasingly shared by many, there have been plenty of interesting developments in the field. Not only have the tradeoffs in existing systems, such as TensorFlow and PyTorch, not been resolved, but they are clearer than ever now that both frameworks contain distinct "static graph" and "eager execution" interfaces. Meanwhile, the idea of ML models fundamentally being differentiable algorithms – often called differentiable programming – has caught on.
Where current frameworks fall short, several exciting new projects have sprung up that dispense with graphs entirely, to bring differentiable programming to the mainstream. Myia, by the Theano team, differentiates and compiles a subset of Python to high-performance GPU code. Swift for TensorFlow extends Swift so that compatible functions can be compiled to TensorFlow graphs. And finally, the Flux ecosystem is extending Julia’s compiler with a number of ML-focused tools, including first-class gradients, just-in-time CUDA kernel compilation, automatic batching and support for new hardware such as TPUs.
This talk will demonstrate how Julia is increasingly becoming a natural language for machine learning, the kind of libraries and applications the Julia community is building, the contributions from India (there are many!), and our plans going forward.
- Kuldeep Jiwani - Sessionisation via stochastic periods for root event identification
45 Mins
Talk
Intermediate
In today's world the majority of information is generated by self-sustaining systems like various kinds of bots, crawlers, servers and online services. This information flows on the axis of time and is generated by these actors under some complex logic: for example, a stream of buy/sell order requests from an Order Gateway in the financial world, a stream of web requests from a monitoring / crawling service in the web world, or maybe a hacker's bot sitting on the internet and attacking various computers. Although we may not be able to know the motive or intention behind these data sources, via some unsupervised techniques we can try to infer the pattern or correlate the events based on their multiple occurrences on the axis of time. Associating a chain of events in order of time helps in doing root event analysis. In certain cases a time-ordered correlation and root event identification is good enough to automatically identify signatures of various malicious actors and take appropriate corrective actions to stop cyber attacks, malicious social campaigns, etc.
Sessionisation is one such unsupervised technique that tries to find the signal in a stream of events associated with a timestamp. In an ideal world it would resolve to finding periods within a mixture of sinusoidal waves. But in the real world this is a much more complex activity, as even the systematic events generated by machines over the internet behave in an erratic manner. So the notion of a period for a signal also changes in the real world: we can no longer associate it with a single number, it has to be treated as a random variable, with expected values and associated variance. Hence we need to model "stochastic periods" and learn their probability distributions in an unsupervised manner.
The main focus of this talk will be to showcase applied data science techniques to discover stochastic periods. There are many ways to obtain periods in data, so the journey begins with a walkthrough of existing techniques like the FFT (Fast Fourier Transform), then moves on to Gaussian Mixture Models (a toy FFT-based period finder is sketched below). After highlighting the shortcomings of these techniques we will succinctly explain one of the most general non-parametric Bayesian approaches to solve this problem. Without going too deep into the complex math, we will get back to applied data science and discuss a much simpler technique that can solve the same problem if certain assumptions are satisfied.
In this talk we will demonstrate some time-based patterns we discovered while working on a security analytics use case that uses Sessionisation. We will demonstrate such patterns on an open-source malware attack dataset that is publicly available.
Key concepts explained in the talk: Sessionisation, Bayesian techniques of Machine Learning, Gaussian Mixture Models, Kernel density estimation, FFT, stochastic periods, probabilistic modelling, Bayesian non-parametric methods
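As a toy illustration of the FFT starting point mentioned above (our sketch, not the speaker's code), a dominant period can be recovered from a noisy event-rate signal with NumPy:

```python
import numpy as np

# synthetic event-rate signal: one cycle roughly every 24 time steps, plus noise
rng = np.random.default_rng(42)
t = np.arange(1024)
signal = np.sin(2 * np.pi * t / 24) + 0.5 * rng.standard_normal(t.size)

# FFT-based period estimate: pick the strongest non-DC frequency component
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(t.size, d=1.0)
dominant = freqs[1:][np.argmax(spectrum[1:])]   # skip the zero-frequency bin
print(f"estimated period ~ {1 / dominant:.1f} steps")   # ~24
```

Real event streams rarely cooperate this well, which is exactly why the talk moves from fixed periods to stochastic ones.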
- Dat Tran - Image ATM - Image Classification for Everyone
45 Mins
Talk
Intermediate
At idealo.de we store and display millions of images. Our gallery contains pictures of all sorts: you’ll find vacuum cleaners, bike helmets as well as hotel rooms. Working with a huge volume of images brings some challenges: How do we organize the galleries? What exactly is in there? Do we actually need all of it?
To tackle these problems you first need to label all the pictures. In 2018 our Data Science team completed four projects in the area of image classification, and in 2019 there were many more to come. Therefore, we decided to automate this process by creating a piece of software we called Image ATM (Automated Tagging Machine). With the help of transfer learning, Image ATM enables the user to train a Deep Learning model without knowledge or experience in the area of Machine Learning. All you need is data and a spare couple of minutes!
In this talk we will discuss the state-of-the-art technologies available for image classification and present Image ATM in the context of these technologies. We will then give a crash course on our product where we will guide you through different ways of using it: in the shell, in a Jupyter Notebook and on the Cloud. We will also talk about our roadmap for Image ATM.
- Dr. Vikas Agrawal - Non-Stationary Time Series: Finding Relationships Between Changing Processes for Enterprise Prescriptive Systems
45 Mins
Talk
Intermediate
It is too tedious to keep on asking questions, seeking explanations or setting thresholds for trends or anomalies. Why not find problems before they happen, find explanations for the glitches and suggest the shortest paths to fixing them? Businesses are always changing along with their competitive environment and processes, and no static model can handle that. Using dynamic models that find time-delayed interactions between multiple time series, we need to make proactive forecasts of anomalous trends of risks and opportunities in operations, sales, revenue and personnel, based on multiple factors influencing each other over time. We need to know how to set what is “normal” and determine when the business processes from six months ago no longer apply, or apply to only 35% of the cases today, while explaining the causes of risk and sources of opportunity, their relative directions and magnitude, in the context of decision-making and transactional applications, using state-of-the-art techniques.
Real-world processes and businesses keep changing, with one moving part changing another over time. Can we capture these changing relationships? Can we use multiple variables to find risks on key interesting ones? We will take a fun journey culminating in the most recent developments in the field. What methods work well and which break? What can we use in practice?
For instance, we can show a CEO that they would miss their revenue target by over 6% for the quarter, and tell them why, i.e. in what ways their business has changed over the last year. Then we provide prioritized, ordered lists of the quickest, cheapest and least risky paths to help turn the tide, with estimates of relative costs and expected probability of success.
- Dipanjan Sarkar - Explainable Artificial Intelligence - Demystifying the Hype
45 Mins
Tutorial
Intermediate
The field of Artificial Intelligence powered by Machine Learning and Deep Learning has gone through some phenomenal changes over the last decade. Starting off as just a pure academic and research-oriented domain, we have seen widespread industry adoption across diverse domains including retail, technology, healthcare, science and many more. More often than not, the standard toolbox of machine learning, statistical or deep learning models remains the same. New models do come into existence, like Capsule Networks, but industry adoption of the same usually takes several years. Hence, in the industry, the main focus of data science or machine learning is more ‘applied’ rather than theoretical, and effective application of these models on the right data to solve complex real-world problems is of paramount importance.
A machine learning or deep learning model by itself consists of an algorithm which tries to learn latent patterns and relationships from data without hard-coding fixed rules. Hence, explaining how a model works to the business always poses its own set of challenges. There are some domains in the industry, especially in the world of finance like insurance or banking, where data scientists often end up having to use more traditional machine learning models (linear or tree-based). The reason is that model interpretability is very important for the business to explain each and every decision being taken by the model. However, this often leads to a sacrifice in performance. This is where complex models like ensembles and neural networks typically give us better and more accurate performance (since true relationships are rarely linear in nature). We, however, end up being unable to have proper interpretations for model decisions.
To address and talk about these gaps, I will take a conceptual yet hands-on approach where we will explore some of these challenges in-depth about explainable artificial intelligence (XAI) and human interpretable machine learning and even showcase with some examples using state-of-the-art model interpretation frameworks in Python!
- Avishkar Gupta / Dipanjan Sarkar - Leveraging AI to Enhance Developer Productivity & Confidence
Avishkar Gupta (Data Scientist, Red Hat) / Dipanjan Sarkar (Data Science Lead, Applied Materials)
45 Mins
Tutorial
Intermediate
A major approach to the application of AI is leveraging it to create a safer world around us, as well as helping people make choices. With the open source revolution having taken the world by storm and developers relying on various upstream third-party dependencies (too many to choose from! http://www.modulecounts.com/) to develop applications moving petabytes of sensitive data, and mission-critical code that can lead to disastrous failures, it is required now more than ever to build better developer tooling to help developers make safer, better choices in terms of their dependencies, as well as providing them with more insights around the code they are using. Thanks to deep learning, we are able to tackle these complex problems, and this talk covers two diverse and interesting problems we have been trying to solve leveraging deep learning models (recommenders and NLP).
Though we are data scientists, at heart we are also developers building intelligent systems powered by AI. We, the Red Hat developer group, through our “Dependency Analytics” platform and extension, seek to do the same. We call this ‘AI-based insights for developers by developers’!
In this session we would be going into the details of the deep learning models we have implemented and deployed to solve two major problems:
- Dependency Recommendations: Recommend dependencies to a user for their specific application stack by trying to guess their intent by leveraging deep learning based recommender models.
- Pro-active Security and Vulnerability Analysis: We would also touch upon how our platform aims to make developer applications safer by way of CVE (Common Vulnerabilities and Exposures) analyses and the experimental deep learning models we have built to proactively identify potential vulnerabilities. We will talk about how we leveraged deep learning models for NLP to tackle this problem.
This shall be followed by a short architectural overview of the entire platform.
If we have enough time, we intend to showcase some sample code as a part of a tutorial of how we built these deep learning models and do a walkthrough of the same!
- Dr. C.S.Jyothirmayee / Usha Rengaraju / Vijayalakshmi Mahadevan - Deep learning powered Genomic Research
Dr. C.S.Jyothirmayee (Sr. Scientist, Novozymes South Asia Pvt Ltd) / Usha Rengaraju (Principal Data Scientist, Mysuru Consulting Group) / Vijayalakshmi Mahadevan (Faculty Scientist, Institute of Bioinformatics and Applied Biotechnology (IBAB))
90 Mins
Workshop
Advanced
Disease happens when there is a slip in the finely orchestrated dance between physiology, environment and genes. Treatment with chemicals (natural, synthetic or combinations) solved some diseases, but others persisted and were propagated along the generations. The molecular basis of disease became the prime center of studies aiming to understand and analyze root causes. Cancer also showed that the origin of disease, its detection, prognosis and treatment along with cure was not such an uncomplicated process. Treatment of diseases had to be done on a case-by-case basis (no one size fits all).
With the advent of next-generation sequencing, high-throughput analysis, enhanced computing power and new aspirations, neural networks can be brought to bear on this conundrum of complicated genetic elements (the structure and function of the various genes in our systems). This requires genomic material extraction, sequencing (automated systems) and analysis to map the strings of As, Ts, Gs, and Cs, which yields genomic datasets. These datasets are too large for traditional and applied statistical techniques; consequently, the important signals are often incredibly small amid blaring technical noise, requiring far more sophisticated analysis techniques. Artificial intelligence and deep learning give us the power to draw clinically useful information from the genetic datasets obtained by sequencing.
The precision of these analyses has become vital and the way forward for disease detection and predisposition, and it empowers medical authorities to make fair and situationally aware decisions about patient treatment strategies. This kind of genomic profiling, prediction and mode of disease management is useful for tailoring FDA-approved treatment strategies based on these molecular disease drivers and the patient’s molecular makeup.
The present scenario encourages the designing, developing and testing of medicine based on existing genetic insights and models. Deep learning models are helping to analyze and interpret tiny genetic variations (like SNPs - Single Nucleotide Polymorphisms) which result in the unraveling of crucial cellular processes like metabolism and DNA wear and tear. These models are also responsible for identifying disease risk signatures, such as those of cancer, from various body fluids. They have immense potential to revolutionize the healthcare ecosystem. Clinical data collection is currently not streamlined and is done in a haphazard manner; making that data uniformly fetchable and combinable with genetic information would empower the value, interpretation and decisiveness of patient treatment modalities and their outcomes.
There is a huge inflow of medical data from emerging human wearable technologies; this health data, integrated with the ability to quickly carry out complex analyses on rich genomic databases over cloud technologies, would revitalize the disease-fighting capability of humans. A last but still upcoming area of application is direct-to-consumer genomics (the success of 23andMe).
This road map promises an end-to-end system to face disease in all its forms and nature. Medical research and its applications, like gene therapies, gene editing technologies like CRISPR, molecular diagnostics and precision medicine, could be revolutionized by tailoring high-throughput computing methods and applying them to enhanced genomic datasets.
- Ramanathan R / Gurram Poorna Prudhvi - Time Series analysis in Python
Ramanathan R (Director, Zentropy Technologies) / Gurram Poorna Prudhvi (Machine Learning Engineer, mroads)
240 Mins
Workshop
Intermediate
“Time is precious so is Time Series Analysis”
Time series analysis has been around for centuries, helping us solve problems ranging from astronomy to business, and it underpins much of the advanced scientific research around us now. Time stores precious information, which most machine learning algorithms don’t deal with. But time series analysis, which is a mix of machine learning and statistics, helps us to get useful insights. Time series can be applied to various fields like economic forecasting, budgetary analysis, sales forecasting, census analysis and much more. In this workshop, we will look at how to dive deep into time series data and make use of deep learning to make accurate predictions.
The structure of the workshop goes like this:
- Introduction to Time series analysis
- Time Series Exploratory Data Analysis and Data manipulation with pandas
- Forecast time series data with some classical methods (AR, MA, ARMA, ARIMA, GARCH, E-GARCH); a minimal ARIMA sketch follows the library list below
- Introduction to Deep Learning and Time series forecasting using MLP and LSTM
- Forecasting using XGBoost
- Financial Time Series data
Libraries Used:
- Keras (with Tensorflow backend)
- matplotlib
- pandas
- statsmodels
- sklearn
- seaborn
- arch
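As a small preview of the classical forecasting methods listed above, here is a minimal ARIMA sketch using statsmodels on synthetic data (a sketch only; the workshop's datasets and model orders will differ):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# synthetic random walk with drift as a stand-in for a real monthly series
rng = np.random.default_rng(0)
y = pd.Series(np.cumsum(rng.standard_normal(120)) + 0.1 * np.arange(120))

# ARIMA(1,1,1): AR order 1, one round of differencing, MA order 1
result = ARIMA(y, order=(1, 1, 1)).fit()
print(result.params)             # fitted AR/MA coefficients and noise variance
print(result.forecast(steps=6))  # point forecasts for the next 6 periods
```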
- Dr. Rahee Walambe / Vishal Gokhale - Processing Sequential Data using RNNs
Dr. Rahee Walambe (Research and Teaching Faculty, Symbiosis Centre for Applied Artificial Intelligence (SCAAI)) / Vishal Gokhale (Sr. Consultant, Xnsio)
480 Mins
Workshop
Beginner
Data that forms the basis of many of our daily activities, like speech, text and videos, has sequential/temporal dependencies. Traditional deep learning models, being inadequate to model this connectivity, needed to be made recurrent before they brought technologies such as voice assistants (Alexa, Siri) or video-based speech translation (Google Translate) to a practically usable form by reducing the Word Error Rate (WER) significantly. RNNs solve this problem by adding internal memory. The capacities of traditional neural networks are bolstered with this addition, and the results outperform conventional ML techniques wherever the temporal dynamics are more important.
In this full-day immersive workshop, participants will develop an intuition for sequence models through hands-on learning along with the mathematical premise of RNNs.
- Venkatraman J - Entity Co-occurrence and Entity Reputation scoring from Unstructured data using Semantic Knowledge graph
20 Mins
Talk
Intermediate
Knowledge representation has been researched for many years in the AI world and is continuing further still. Once knowledge is represented, reasoning over that extracted knowledge is done by various inferencing techniques. Initial knowledge bases were built using rules from domain experts, and different inferencing techniques like fuzzy inference and Bayesian inference were applied to extract reasoning from those knowledge bases. Semantic networks are another form of knowledge representation which can represent structured data like WordNet and DBpedia, solving problems in a specific domain by storing entities and relations among entities using ontologies.
The knowledge graph is another representation technique deeply researched in academia as well as used by businesses in production to augment search relevancy in information retrieval (the Google Knowledge Graph), improve recommender systems, semantic search applications and also question answering problems. In this talk I will illustrate the benefits of a semantic knowledge graph, how it differs from semantic ontologies, the different technologies involved in building a knowledge graph, and how I built one to analyse unstructured Twitter data and discover hidden relationships from the corpus. I will also show how the knowledge graph is a data scientist's toolkit for quickly discovering hidden relationships and insights from unstructured data.
In this talk I will show the technology and architecture used to determine entity reputation and entity co-occurrence using a knowledge graph. Scoring an entity for reputation is useful in many natural language processing tasks and applications such as recommender systems.
- Suvro Shankar Ghosh - Learning Entity embeddings from Knowledge Graph
Suvro Shankar Ghosh (Data Scientist, Atos Global IT Solutions And Services Private Limited)
45 Mins
Case Study
Intermediate
- Over a period of time, a lot of knowledge bases have evolved. A knowledge base is a structured way of storing information, typically in the form (Subject, Predicate, Object)
- Such knowledge bases are an important resource for question answering and other tasks. But they are often incomplete, as they cannot resemble all the data in the world, and they thereby lack the ability to reason over their discrete entities and their unknown relationships. Here we can introduce an expressive neural tensor network that is suitable for reasoning over known relationships between two entities.
- With such a model in place, we can ask questions, the model will try to predict the missing data links within the trained model and answer the questions, related to finding similar entities, reasoning over them and predicting various relationship types between two entities, not connected in the Knowledge Graph.
- Knowledge Graph infoboxes were added to Google's search engine in May 2012
What is the knowledge graph?
- Knowledge in graph form!
- Captures entities, attributes, and relationships
- More specifically, the “knowledge graph” is a database that collects millions of pieces of data about keywords people frequently search for on the World Wide Web and the intent behind those keywords, based on the already available content
- In most cases, KGs are based on Semantic Web standards and have been generated by a mixture of automatic extraction from text or structured data, and manual curation work
- Structured Search & Exploration, e.g. Google Knowledge Graph, Amazon Product Graph
- Graph Mining & Network Analysis, e.g. Facebook Entity Graph
- Big Data Integration, e.g. IBM Watson
- Other players: Diffbot, GraphIQ, Maana, ParseHub, Reactor Labs, SpazioDati
- Dr. Atul Singh - Endow the gift of eloquence to your NLP applications using pre-trained word embeddings
45 Mins
Talk
Beginner
Word embeddings are the plinth stones of Natural Language Processing (NLP) applications, used to transform human language into vectors that can be understood and processed by machine learning algorithms. Pre-trained word embeddings enable the transfer of prior knowledge about the human language into a new application, thereby enabling the rapid creation of scalable and efficient NLP applications. Since the emergence of word2vec in 2013, the word embeddings field has developed by leaps and bounds, with each successive word embedding outperforming the prior one.
The goal of this talk is to demonstrate the efficacy of using pre-trained word embedding to create scalable and robust NLP applications, and to explain to the audience the underlying theory of word embeddings that makes it possible. The talk will cover prominent word vector embeddings such as BERT and ELMo from the recent literature.
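As a minimal illustration of the idea (our sketch, not the speaker's code), static pre-trained vectors such as GloVe can be pulled in with Gensim's downloader in a couple of lines; contextual embeddings like BERT and ELMo need their own frameworks:

```python
import gensim.downloader as api

# downloads ~66 MB of pre-trained GloVe vectors on first run
glove = api.load('glove-wiki-gigaword-50')

# prior knowledge of language comes for free
print(glove.most_similar('python', topn=5))
print(glove.similarity('king', 'queen'))

# the classic analogy: king - man + woman ~ queen
print(glove.most_similar(positive=['king', 'woman'], negative=['man'], topn=1))
```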
- Favio Vázquez - Complete Data Science Workflows with Open Source Tools
90 Mins
Tutorial
Beginner
Cleaning, preparing, transforming, exploring and modeling data is what we hear about all the time in data science, and these steps may be the most important ones. But that's not the only thing about data science: in this talk you will learn how the combination of Apache Spark, Optimus, the Python ecosystem and Data Operations can form a whole framework for data science that will allow you and your company to go further, beyond common sense and intuition, to solve complex business problems.
- Suvro Shankar Ghosh - Real-Time Advertising Based On Web Browsing In Telecom Domain
Suvro Shankar Ghosh (Data Scientist, Atos Global IT Solutions And Services Private Limited)
45 Mins
Case Study
Intermediate
The following section describes the Telco domain real-time advertising based on browsing use case in terms of:
- Potential business benefits to earn.
- Functional use case architecture depicted.
- Data sources (attributes required).
- Analytics to be performed.
- Output to be provided and target systems to be integrated with.
This use case is part of the monetization category. The goal of the use case is to provide a kind of data mart that gives either Telecom business parties or external third parties sufficient, relevant and customized information to produce real-time advertising to Telecom end users. The customer targets are all Telecom network end-users.
The customized information to be delivered to advertisers is based on several dimensions:
- Customer characteristics: demographic, telco profile.
- Customer usage: Telco products or any other interests.
- Customer time/space identification: location, zoning areas, usage time windows.
Use case requirements are detailed in the description below as “Targeting method”.
- Search Engine Targeting:
The telco will use users' web history to track what they are looking at and to gather information about them. When a user goes onto a website, their web browsing history will show information about the user: what he or she searched for and where they are from (found via the IP address). A profile is then built around them, allowing the telco to target ads to the user more specifically.
- Content and Contextual Targeting:
This is when advertisers can put ads in a specific place, based on the relative content present. This targeting method can be used across different mediums, for example in an article online, about purchasing homes would have an advert associated with this context, like an insurance ad. This is achieved through an ad matching system which analyses the contents on a page or finds keywords and presents a relevant advert, sometimes through pop-ups.
- Technical Targeting:
This form of targeting is associated with the user’s own software or hardware status. The advertisement is altered depending on the user’s available network bandwidth, for example if a user is on their mobile phone that has a limited connection, the ad delivery system will display a version of the ad that is smaller for a faster data transfer rate.
- Time Targeting:
This type of targeting is centered around time and focuses on the idea of fitting in around people’s everyday lifestyles. For example, scheduling specific ads at a timeframe from 5-7pm, when the
- Sociodemographic Targeting:
This form of targeting focuses on the characteristics of consumers, including their age, gender, and nationality. The idea is to target users specifically, using this data about them collected, for example, targeting a male in the age bracket of 18-24. The telco will use this form of targeting by showing advertisements relevant to the user’s individual demographic profile. this can show up in forms of banner ads, or commercial videos.
- Geographical and Location-Based Targeting:
This type of advertising involves targeting different users based on their geographic location. IP addresses can signal the location of a user and can usually transfer the location through different cells.
- Behavioral Targeting:
This form of targeted advertising is centered around the activity/actions of users and is more easily achieved on web pages. Information from browsing websites can be collected, which finds patterns in users search history.
- Retargeting:
This is where advertising uses behavioral targeting to produce ads that follow you after you have looked at or purchased a particular item. Advertisers use this information to ‘follow you’ and try to grab your attention so you do not forget.
- Opinions, attitudes, interests, and hobbies:
Psychographic segmentation also includes opinions on gender and politics, sporting and recreational activities, views on the environment and arts and cultural issues.
- Pankaj Kumar / Abinash Panda / Usha Rengaraju - Quantitative Finance: Global macro trading strategy using Probabilistic Graphical Models
Pankaj Kumar (Quantitative Research Associate, Statestreet Global Advisors) / Abinash Panda (CEO, Prodios Labs) / Usha Rengaraju (Principal Data Scientist, Mysuru Consulting Group)
90 Mins
Workshop
Advanced
{ This is a hands-on workshop on the pgmpy package. The creator of the pgmpy package, Abinash Panda, will do the code demonstration }
Crude oil plays an important role in macroeconomic stability and it heavily influences the performance of the global financial markets. Unexpected fluctuations in the real price of crude oil are detrimental to the welfare of both oil-importing and oil-exporting economies. Global macro hedge funds view forecasting the price of oil as one of the key variables in generating macroeconomic projections, and it also plays an important role for policy makers in predicting recessions.
Probabilistic Graphical Models can help in improving the accuracy of existing quantitative models for crude oil price prediction, as they take into account many different macroeconomic and geopolitical variables.
Hidden Markov Models are used to detect underlying regimes of the time-series data by discretising the continuous time-series data. In this workshop we use the Baum-Welch algorithm for learning the HMMs, and the Viterbi Algorithm to find the sequence of hidden states (i.e. the regimes) given the observed states (i.e. monthly differences) of the time-series.
Belief Networks are used to analyse the probability of a regime in crude oil given evidence in the form of a set of regimes in the macroeconomic factors. The Greedy Hill Climbing algorithm is used to learn the Belief Network, and the parameters are then learned using Bayesian Estimation with a K2 prior. Inference is then performed on the Belief Networks to obtain a forecast of the crude oil markets, and the forecast is tested on real data.
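As a toy illustration of the HMM regime-detection step (our sketch, assuming the hmmlearn package; the workshop's own code demonstration centres on pgmpy), fit() below runs Baum-Welch and predict() runs Viterbi decoding:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM   # pip install hmmlearn (assumed)

# synthetic monthly price differences with a calm and a volatile regime
rng = np.random.default_rng(7)
calm = rng.normal(0.0, 0.5, size=120)
crisis = rng.normal(-0.3, 2.0, size=60)
diffs = np.concatenate([calm, crisis, calm]).reshape(-1, 1)

# fit() learns the HMM via Baum-Welch (EM); predict() decodes via Viterbi
hmm = GaussianHMM(n_components=2, covariance_type='full', n_iter=200, random_state=7)
hmm.fit(diffs)
regimes = hmm.predict(diffs)

print(regimes[:10])            # inferred hidden regime per observation
print(hmm.means_.ravel())      # per-regime mean of the observed differences
```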
- Shalini Sinha / Ashok J / Yogesh Padmanaban - Hybrid Classification Model with Topic Modelling and LSTM Text Classifier to identify key drivers behind Incident Volume
45 Mins
Case Study
Intermediate
Incident volume reduction is one of the top priorities for any large-scale service organization along with timely resolution of incidents within the specified SLA parameters. AI and Machine learning solutions can help IT service desk manage the Incident influx as well as resolution cost by
- Identifying major topics from incident description and planning resource allocation and skill-sets accordingly
- Producing knowledge articles and resolution summary of similar incidents raised earlier
- Analyzing Root Causes of incidents and introducing processes and automation framework to predict and resolve them proactively
We will look at different approaches for combining standard document clustering algorithms such as Latent Dirichlet Allocation (LDA) and K-means clustering on doc2vec, along with text classification, to produce easily interpretable document clusters with semantically coherent text representations. This helped the IT operations of a large FMCG client identify the key drivers/topics contributing towards incident volume and take the necessary action on them.
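As a minimal illustration of the LDA building block named above (a sketch with Gensim, not the case study's actual code):

```python
from gensim import corpora
from gensim.models import LdaModel

# toy incident descriptions standing in for real ticket text
docs = [
    'password reset request for user account',
    'user account locked after failed password attempts',
    'printer not responding on office network',
    'network printer driver installation failed',
]
texts = [doc.split() for doc in docs]

# standard gensim pipeline: dictionary -> bag-of-words corpus -> LDA
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, passes=20, random_state=1)

for topic_id, words in lda.print_topics():
    print(topic_id, words)   # top weighted words per discovered topic
```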
- Saikat Sarkar / Dhanya Parameshwaran / Dr Sweta Choudhary / Srikanth Ramaswamy / Usha Rengaraju - AI meets Neuroscience
Saikat Sarkar (Sr. Consultant Manager - AA & Human Data Science, IMS Health) / Dhanya Parameshwaran (Data Scientist, SAP Labs) / Dr Sweta Choudhary (Head - Medical Products & Services, Medwell Ventures) / Srikanth Ramaswamy (Group Leader and Sr. Scientist, Blue Brain Project, EPFL) / Usha Rengaraju (Principal Data Scientist, Mysuru Consulting Group)
480 Mins
Workshop
Advanced
This is a mixer workshop where lots of clinicians, medical experts, neuroimaging experts, neuroscientists, data scientists and statisticians will come under one roof to bring together this revolutionary workshop.
The theme will be updated soon.
Our celebrity and distinguished presenter Srikanth Ramaswamy, who is an advisor at Mysuru Consulting Group and also works on the Blue Brain Project at EPFL, will be delivering an expert talk in the workshop.
https://www.linkedin.com/in/ramaswamysrikanth/
{ This workshop will be a combination of panel discussions, expert talks and a neuroimaging data science workshop (applying machine learning and deep learning algorithms to neuroimaging datasets) }
{ We are currently onboarding several experts from the neuroscience domain: neurosurgeons, neuroscientists and computational neuroscientists. Details of the speakers will be released soon }
Abstract for the Neuroimaging Data Science Part of the workshop:
The study of the human brain with neuroimaging technologies is at the cusp of an exciting era of Big Data. Many data collection projects, such as the NIH-funded Human Connectome Project, have made large, high-quality datasets of human neuroimaging data freely available to researchers. These large data sets promise to provide important new insights about human brain structure and function, and to provide us the clues needed to address a variety of neurological and psychiatric disorders. However, neuroscience researchers still face substantial challenges in capitalizing on these data, because these Big Data require a different set of technical and theoretical tools than those that are required for analyzing traditional experimental data. These skills and ideas, collectively referred to as Data Science, include knowledge in computer science and software engineering, databases, machine learning and statistics, and data visualization.
The workshop covers data analysis, statistics and data visualization, applying cutting-edge analytics to complex and multimodal neuroimaging datasets. Topics which will be covered in this workshop are statistics, associative techniques, graph theoretical analysis, causal models, nonparametric inference, and meta-analytical synthesis.
- Antrixsh Gupta - Creating Custom Interactive Data Visualization Dashboards with Bokeh
90 Mins
Workshop
Beginner
This will be a hands-on workshop on how to build a custom interactive dashboard application on your local machine or on any cloud service provider. You will also learn how to deploy this application with both security and scalability in mind.
Powerful data visualization software solutions are extremely useful when building interactive data visualization dashboards. However, these types of solutions might not provide sufficient customization options. For those scenarios, you can use open source libraries like D3.js, Chart.js, or Bokeh to create custom dashboards. These libraries offer a lot of flexibility for building dashboards with tailored features and visualizations.
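As a minimal taste of Bokeh (a single standalone panel; a full dashboard application would typically be served with `bokeh serve`):

```python
from bokeh.plotting import figure, show

# a minimal interactive figure: pan / zoom / hover come from the toolbar
p = figure(title='Sample dashboard panel', x_axis_label='day', y_axis_label='visits',
           tools='pan,wheel_zoom,box_zoom,reset,hover')
p.line([1, 2, 3, 4, 5], [10, 35, 22, 48, 30], line_width=2, legend_label='site traffic')
p.scatter([1, 2, 3, 4, 5], [10, 35, 22, 48, 30], size=8)

show(p)   # writes an HTML file and opens it in the browser
```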
- Dr. Mayuri Mehta - Demonstration of Deep Learning based Healthcare Applications
Dr. Mayuri Mehta (Professor & PG In-Charge, Department of Computer Engineering, Sarvajanik College of Engineering and Technology)
45 Mins
Demonstration
Intermediate
Recent advancements in AI are proving beneficial in the development of applications in various spheres of the healthcare sector, such as microbiological analysis, drug discovery, disease diagnosis, genomics, medical imaging and bioinformatics, for translating large-scale data into improved human healthcare. Automation in healthcare using machine learning/deep learning assists physicians to make faster, cheaper and more accurate diagnoses.
Due to the increasing availability of electronic healthcare data (structured as well as unstructured) and the rapid progress of analytics techniques, a lot of research is being carried out in this area. Popular AI techniques include machine learning/deep learning for structured data and natural language processing for unstructured data. Guided by relevant clinical questions, powerful deep learning techniques can unlock clinically relevant information hidden in the massive amount of data, which in turn can assist clinical decision making.
We have successfully developed three deep learning based healthcare applications using TensorFlow and are currently working on three more healthcare related projects. In this demonstration session, we shall first briefly discuss the significance of deep learning for healthcare solutions. Next, we will demonstrate two deep learning based healthcare applications developed by us. The discussion of each application will include the precise problem statement, proposed solution, data collected & used, experimental analysis and the challenges encountered & overcome to achieve this success. Finally, we will briefly discuss the other applications we are currently working on and the future scope of research in this area.
- Saurabh Jha / Rohan Shravan / Usha Rengaraju - Hands on Deep Learning for Computer Vision
Saurabh Jha (Deep Learning Architect, Dell) / Rohan Shravan / Usha Rengaraju (Principal Data Scientist, Mysuru Consulting Group)
480 Mins
Workshop
Intermediate
Computer Vision has lots of applications including medical imaging, autonomous vehicles, industrial inspection and augmented reality. Use of Deep Learning for Computer Vision can be categorized into multiple categories for both images and videos: classification, detection, segmentation & generation.
Having worked in Deep Learning with a focus on Computer Vision, we have come across various challenges and learned best practices over a period of experimenting with cutting-edge ideas. This workshop is for Data Scientists & Computer Vision Engineers whose focus is deep learning. We will cover state-of-the-art architectures for Image Classification and Segmentation, and practical tips & tricks to train deep neural network models. It will be a hands-on session where every concept will be introduced through Python code, and our choice of deep learning frameworks will be PyTorch v1.0 and Keras.
Given we have only 8 hours, we will cover the most important fundamentals and current techniques, and avoid anything which is obsolete or not being used by state-of-the-art algorithms. We will directly start with building the intuition for Convolutional Neural Networks, and focus on core architectural problems. We will try and answer some of the hard questions, like how many layers must there be in a network, or how many kernels should we add. We will look at the architectural journey of some of the best papers and discover what each brought into the field of Vision AI, making today’s best networks possible. We will cover 9 different kinds of convolutions which address a spectrum of problems like running DNNs on constrained hardware, super-resolution, image segmentation, etc. The concepts would be good enough for all of us to move on to harder problems like segmentation or super-resolution later, but we will focus on object recognition, followed by object detection. We will build our networks step by step, learning how optimization techniques actually improve our networks and exactly when we should introduce them. We hope to leave you with a confidence that will help you read research papers like second nature. Since we want the sessions to be productive, we will, instead of introducing all the problems and solutions, focus on the fundamentals of modern deep neural networks.