Since we originally proposed the need for a first-class language, compiler and ecosystem for machine learning (ML), a view that is now shared by many, there have been plenty of interesting developments in the field. Not only have the tradeoffs in existing systems such as TensorFlow and PyTorch not been resolved, they are clearer than ever now that both frameworks contain distinct "static graph" and "eager execution" interfaces. Meanwhile, the idea that ML models are fundamentally differentiable algorithms, often called differentiable programming, has caught on.

Where current frameworks fall short, several exciting new projects have sprung up that dispense with graphs entirely to bring differentiable programming to the mainstream. Myia, by the Theano team, differentiates and compiles a subset of Python to high-performance GPU code. Swift for TensorFlow extends Swift so that compatible functions can be compiled to TensorFlow graphs. And the Flux ecosystem is extending Julia's compiler with a number of ML-focused tools, including first-class gradients, just-in-time CUDA kernel compilation, automatic batching and support for new hardware such as TPUs.
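The "first-class gradients" idea can be illustrated with a minimal forward-mode automatic differentiation sketch based on dual numbers. This is illustrative Python, not how any of the frameworks above actually implement gradients; the `Dual` class and `grad` helper are invented names.

```python
# Forward-mode automatic differentiation with dual numbers: each value
# carries its derivative along, so ordinary Python functions become
# differentiable with no graph construction.

class Dual:
    """A number paired with its derivative (a 'dual number')."""
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)
    __rmul__ = __mul__

def grad(f):
    """Return a function computing df/dx at x."""
    return lambda x: f(Dual(x, 1.0)).deriv

# Differentiate an ordinary Python function.
f = lambda x: 3 * x * x + 2 * x + 1
print(grad(f)(2.0))  # f'(x) = 6x + 2, so 14.0 at x = 2
```

Real systems (Flux, Myia, Swift for TensorFlow) use source-to-source or reverse-mode techniques for efficiency, but the programming model is the same: gradients of plain code, as values.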

This talk will demonstrate how Julia is increasingly becoming a natural language for machine learning, showcase the kinds of libraries and applications the Julia community is building, highlight the contributions from India (there are many!), and outline our plans going forward.

 

Target Audience

All

Submitted 1 month ago

Public Feedback

Suggest improvements to the Speaker
  • Anoop Kulkarni  ~  1 month ago

    Viral, thanks for your submission. I have been using Julia for some time now as part of quantum computing, moving only recently to Julia for machine learning. Looking forward to this talk from you given your pedigree in the area.


    ~anoop


  • Liked Dr. Vikas Agrawal

    Dr. Vikas Agrawal - Non-Stationary Time Series: Finding Relationships Between Changing Processes for Enterprise Prescriptive Systems

    45 Mins
    Talk
    Intermediate

    It is too tedious to keep asking questions, seeking explanations or setting thresholds for trends and anomalies. Why not find problems before they happen, find explanations for the glitches and suggest the shortest paths to fixing them? Businesses are always changing, along with their competitive environment and processes, and no static model can handle that. Using dynamic models that find time-delayed interactions between multiple time series, we need to make proactive forecasts of anomalous trends of risks and opportunities in operations, sales, revenue and personnel, based on multiple factors influencing each other over time. We need to know how to set what is "normal" and determine when the business processes from six months ago no longer apply, or apply to only 35% of the cases today, while explaining the causes of risk and sources of opportunity, their relative directions and magnitude, in the context of decision-making and transactional applications, using state-of-the-art techniques.

    Real-world processes and businesses keep changing, with one moving part changing another over time. Can we capture these changing relationships? Can we use multiple variables to find risks on the key ones of interest? We will take a fun journey culminating in the most recent developments in the field. Which methods work well and which break? What can we use in practice?

    For instance, we can show a CEO that they would miss their revenue target by over 6% for the quarter, and tell them why, i.e. in what ways their business has changed over the last year. Then we provide prioritized, ordered lists of the quickest, cheapest and least risky paths to help turn the tide, with estimates of relative costs and expected probability of success.

  • Liked Maryam Jahanshahi

    Maryam Jahanshahi - Applying Dynamic Embeddings in Natural Language Processing to Analyze Text over Time

    Maryam Jahanshahi, Research Scientist, TapRecruit
    Submitted 3 months ago
    Sold Out!
    45 Mins
    Case Study
    Intermediate

    Many data scientists are familiar with word embedding models such as word2vec, which capture semantic similarity of words in a large corpus. However, word embeddings are limited in their ability to interrogate a corpus alongside other context or over time. Moreover, word embedding models either need significant amounts of data, or tuning through transfer learning of a domain-specific vocabulary that is unique to most commercial applications.

    In this talk, I will introduce exponential family embeddings. Developed by Rudolph and Blei, these methods extend the idea of word embeddings to other types of high-dimensional data. I will demonstrate how they can be used to conduct advanced topic modeling on medium-sized datasets that are specialized enough to require significant modification of a word2vec model and that contain more general data types (categorical, count, continuous). I will discuss how my team implemented a dynamic embedding model using TensorFlow and our proprietary corpus of job descriptions. Using both the categorical and the natural language data associated with jobs, we charted the development of different skill sets over the last 3 years. I will focus the description of results on how tech and data science skill sets have developed, grown and pollinated other types of jobs over time.
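As background for the embedding discussion above: semantic similarity between embedding vectors is typically measured with cosine similarity. A minimal Python sketch, with made-up three-dimensional vectors standing in for trained embeddings:

```python
# Cosine similarity between embedding vectors. The tiny vectors below are
# invented for illustration; real embeddings come from a trained model
# such as word2vec and have hundreds of dimensions.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

emb = {
    "python": [0.9, 0.1, 0.3],
    "pandas": [0.8, 0.2, 0.4],
    "salary": [0.1, 0.9, 0.2],
}
print(round(cosine(emb["python"], emb["pandas"]), 3))  # 0.984: similar
print(round(cosine(emb["python"], emb["salary"]), 3))  # 0.271: dissimilar
```

Dynamic embeddings keep this geometry but let the vectors drift over time slices, which is what allows skill sets to be tracked across years.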

  • Liked Favio Vázquez

    Favio Vázquez - Complete Data Science Workflows with Open Source Tools

    90 Mins
    Tutorial
    Beginner

    Cleaning, preparing, transforming, exploring and modeling data is what we hear about all the time in data science, and these steps may be the most important ones. But that's not all there is to data science: in this talk you will learn how the combination of Apache Spark, Optimus, the Python ecosystem and Data Operations can form a complete framework for data science that will allow you and your company to go further, beyond common sense and intuition, to solve complex business problems.

  • Liked Saurabh Jha

    Saurabh Jha / Usha Rengaraju - Hands on Deep Learning for Computer Vision – Techniques for Image Segmentation

    480 Mins
    Workshop
    Intermediate

    Computer vision has many applications, including medical imaging, autonomous vehicles, industrial inspection and augmented reality. Deep learning for computer vision spans multiple categories of tasks for both images and videos: classification, detection, segmentation and generation.

    Having worked in deep learning with a focus on computer vision, we have come across various challenges and learned best practices over a period of experimenting with cutting-edge ideas. This workshop is for data scientists and computer vision engineers whose focus is deep learning. We will cover state-of-the-art architectures for image segmentation and practical tips and tricks for training deep neural network models. It will be a hands-on session where every concept is introduced through Python code; our deep learning framework of choice will be PyTorch v1.0.

    The workshop takes a structured approach. First it covers basic techniques in image processing and Python for handling images and building PyTorch data loaders. We then look at how image segmentation was done in the pre-CNN era, covering clustering techniques for segmentation. Starting from the basics of neural networks, we introduce convolutional neural networks and the advanced ResNet architecture, then the Fully Convolutional Network paper and its impact on semantic segmentation. We cover the latest semantic segmentation architectures with code, the basics of scene text understanding in PyTorch, and how to run carefully designed experiments using callbacks and hooks. We introduce discriminative learning rates and mixed precision for training deep neural network models. The idea is to bridge the gap between theory and practice and to teach how to run practical experiments and tune deep learning based systems, covering tricks introduced in various research papers, including an in-depth discussion of the interaction between batch norm, weight decay and learning rate.
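As an illustration of the discriminative learning rate idea mentioned above, the sketch below applies different step sizes to different parameter groups in a hand-rolled SGD update. This is plain Python with invented parameter names and numbers, not the PyTorch API the workshop will use.

```python
# Discriminative learning rates: pretrained early layers get small update
# steps, a freshly initialised head gets large ones. Parameters and
# gradients here are scalar stand-ins for real tensors.
groups = [
    {"params": {"conv1.w": 1.0}, "lr": 1e-4},  # early layers: small steps
    {"params": {"head.w": 1.0},  "lr": 1e-2},  # new head: large steps
]
grads = {"conv1.w": 0.5, "head.w": 0.5}

# One hand-rolled SGD step per group: w <- w - lr * grad
for g in groups:
    for name in g["params"]:
        g["params"][name] -= g["lr"] * grads[name]

print(groups[0]["params"]["conv1.w"], groups[1]["params"]["head.w"])
```

The same gradient moves the head a hundred times further than the early layer, which is the whole point when fine-tuning a pretrained backbone.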

  • Liked Pankaj Kumar

    Pankaj Kumar / Abinash Panda / Usha Rengaraju - Quantitative Finance :Global macro trading strategy using Probabilistic Graphical Models

    90 Mins
    Workshop
    Advanced

    Crude oil plays an important role in macroeconomic stability, and it heavily influences the performance of the global financial markets. Unexpected fluctuations in the real price of crude oil are detrimental to the welfare of both oil-importing and oil-exporting economies. Global macro hedge funds view the forecast price of oil as one of the key variables in generating macroeconomic projections, and it also plays an important role for policy makers in predicting recessions.

    Probabilistic Graphical Models can help improve the accuracy of existing quantitative models for crude oil price prediction, as they take into account many different macroeconomic and geopolitical variables.

    Hidden Markov Models are used to detect the underlying regimes of the time-series data by discretising the continuous time-series data. In this workshop we use the Baum-Welch algorithm for learning the HMMs, and the Viterbi algorithm to find the sequence of hidden states (i.e. the regimes) given the observed states (i.e. monthly differences) of the time-series.

    Belief Networks are used to analyse the probability of a regime in crude oil given evidence in the form of regimes in the macroeconomic factors. A greedy hill-climbing algorithm is used to learn the Belief Network, and the parameters are then learned by Bayesian estimation with a K2 prior. Inference is performed on the Belief Network to obtain a forecast of the crude oil markets, and the forecast is tested on real data.

  • Liked Shrutika Poyrekar

    Shrutika Poyrekar / kiran karkera / Usha Rengaraju - Introduction to Bayesian Networks

    90 Mins
    Workshop
    Beginner

    Most machine learning models assume independent and identically distributed (i.i.d.) data, but graphical models can capture almost arbitrarily rich dependency structures between variables. They encode conditional independence structure with graphs. A Bayesian network, one type of graphical model, describes a probability distribution over all variables by putting edges between the variable nodes, where edges represent the conditional probability factors in the factorized probability distribution. Bayesian networks thus provide a compact representation for dealing with uncertainty using an underlying graphical structure and probability theory. These models have a variety of applications, such as medical diagnosis, biomonitoring, image processing, turbo codes, information retrieval, document classification and gene regulatory networks, amongst many others. The models are interpretable, as they are able to capture causal relationships between different features. They can work efficiently with small data and can also deal with missing data, which gives them more power than conventional machine learning and deep learning models.

    In this session, we will discuss the concepts of conditional independence, d-separation, the Hammersley-Clifford theorem, Bayes' theorem, Expectation Maximization and Variable Elimination. There will be a code walkthrough of a simple case study.
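Bayes' theorem, the building block of the inference topics listed above, can be shown on the smallest possible network: two nodes, one edge, with invented probabilities.

```python
# A toy Bayesian network Rain -> WetGrass, queried with Bayes' theorem.
# All numbers are illustrative.
p_rain = 0.3
p_wet_given_rain = 0.9
p_wet_given_dry = 0.2

# P(wet) by total probability over the parent
p_wet = p_wet_given_rain * p_rain + p_wet_given_dry * (1 - p_rain)

# Bayes: P(rain | wet) = P(wet | rain) * P(rain) / P(wet)
p_rain_given_wet = p_wet_given_rain * p_rain / p_wet
print(round(p_rain_given_wet, 3))  # 0.659
```

Variable elimination generalises exactly this computation (sum out hidden variables, then normalise) to networks with many nodes.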

  • Liked Saikat Sarkar

    Saikat Sarkar / Dr Sweta Choudhary / Raunak Bhandari / Srikanth Ramaswamy / Usha Rengaraju - AI meets Neuroscience

    480 Mins
    Workshop
    Advanced

    This is a mixer workshop: clinicians, medical experts, neuroimaging experts, neuroscientists, data scientists and statisticians will come together under one roof for this revolutionary workshop.

    The theme will be updated soon.

    Our celebrated and distinguished presenter Srikanth Ramaswamy, who is an advisor at Mysuru Consulting Group and also works on the Blue Brain Project at EPFL, will be delivering an expert talk in the workshop.

    https://www.linkedin.com/in/ramaswamysrikanth/

    (This workshop will be a combination of panel discussions, expert talks and a neuroimaging data science workshop applying machine learning and deep learning algorithms to neuroimaging datasets.)

    (We are currently onboarding several experts from the neuroscience domain: neurosurgeons, neuroscientists and computational neuroscientists. Details of the speakers will be released soon.)

    Abstract for the Neuroimaging Data Science Part of the workshop:

    The study of the human brain with neuroimaging technologies is at the cusp of an exciting era of Big Data. Many data collection projects, such as the NIH-funded Human Connectome Project, have made large, high-quality datasets of human neuroimaging data freely available to researchers. These large data sets promise to provide important new insights about human brain structure and function, and to provide us the clues needed to address a variety of neurological and psychiatric disorders. However, neuroscience researchers still face substantial challenges in capitalizing on these data, because these Big Data require a different set of technical and theoretical tools than those that are required for analyzing traditional experimental data. These skills and ideas, collectively referred to as Data Science, include knowledge in computer science and software engineering, databases, machine learning and statistics, and data visualization.

    The workshop covers data analysis, statistics and data visualization, applying cutting-edge analytics to complex and multimodal neuroimaging datasets. Topics covered include statistics, associative techniques, graph-theoretical analysis, causal models, nonparametric inference, and meta-analytical synthesis.

  • Liked Dr. C.S.Jyothirmayee

    Dr. C.S.Jyothirmayee / Usha Rengaraju / Vijayalakshmi Mahadevan - Deep learning powered Genomic Research

    90 Mins
    Workshop
    Advanced

    Disease happens when there is a slip in the finely orchestrated dance between physiology, environment and genes. Treatment with chemicals (natural, synthetic or a combination) solved some diseases, but others persisted and propagated along the generations. The molecular basis of disease became the prime focus of studies seeking to understand and analyze root causes. Cancer, too, showed that the origin of disease, detection, prognosis, treatment and cure are not uncomplicated processes. Treatment of diseases has to be done on a case-by-case basis (no one size fits all).

    With the advent of next-generation sequencing, high-throughput analysis, enhanced computing power and neural networks, we have new means to address this conundrum of complicated genetic elements (the structure and function of various genes in our systems). This requires extraction of genomic material, automated sequencing and analysis to map the strings of As, Ts, Gs and Cs, which yields genomic datasets. These datasets are too large for traditional applied statistical techniques; the important signals are often incredibly small amid blaring technical noise, which demands far more sophisticated analysis techniques. Artificial intelligence and deep learning give us the power to draw clinically useful information from the genetic datasets obtained by sequencing.

    The precision of these analyses has become vital and is the way forward for disease detection and predisposition assessment; it empowers medical authorities to make fair and situationally appropriate decisions about patient treatment strategies. This kind of genomic profiling, prediction and disease management is useful for tailoring FDA-approved treatment strategies to these molecular disease drivers and the patient's molecular makeup.

    The present scenario encourages the designing, developing and testing of medicine based on existing genetic insights and models. Deep learning models are helping to analyze and interpret tiny genetic variations (like SNPs, Single Nucleotide Polymorphisms) which underlie crucial cellular processes like metabolism and DNA wear and tear. These models can also identify disease risk signatures, such as for cancer, in various body fluids. They have immense potential to revolutionize the healthcare ecosystem. Clinical data collection today is not streamlined and is done in a haphazard manner; making that data uniform, fetchable and combinable with genetic information would strengthen the value and interpretation of patient treatment decisions and their outcomes.

    There is a huge inflow of medical data from emerging human wearable technologies; integrated with other health data and the ability to quickly carry out complex analyses on rich genomic databases over cloud technologies, it would revitalize the disease-fighting capability of humans. A last but still upcoming area of application is direct-to-consumer genomics (the success of 23andMe).

    This road map promises an end-to-end system to face disease in all its forms and natures. Medical research and its applications, such as gene therapies, gene editing technologies like CRISPR, molecular diagnostics and precision medicine, could be revolutionized by tailoring high-throughput computing methods and applying them to enhanced genomic datasets.

  • Liked Anupam Purwar

    Anupam Purwar - An Industrial IoT system for wireless instrumentation: Development, Prototyping and Testing

    45 Mins
    Talk
    Intermediate

    The next generation of machinery, viz. turbines, aircraft and boilers, will rely heavily on smart data acquisition and monitoring to meet performance and reliability requirements. These systems require the accurate acquisition of parameters like pressure, temperature and heat flux in real time for structural health monitoring, automation and intelligent control, which calls for sophisticated instrumentation to measure these parameters and transmit them in real time. In the present work, a wireless sensor network (WSN) based on a novel high-temperature thermocouple cum heat flux sensor has been proposed. The architecture of this WSN has been evolved keeping in mind robustness, safety and affordability. A WiFi communication protocol based on the IEEE 802.11 b/g/n specification has been utilized to create a secure, low-power WSN. The thermocouple cum heat flux sensor and instrumentation enclosure have been designed using rigorous finite element modelling. The sensor and wireless transmission unit have been housed in an enclosure capable of withstanding pressures up to 100 bar and temperatures up to 2500 K. The sensor signal is conditioned before being passed to the wireless ESP8266-based ESP12E transmitter, which transmits data to a web server; the system uploads the data to a cloud database in real time, providing seamless data availability to decision makers across the globe with no time lag and ultra-low power consumption. The real-time data is envisaged to be used for structural health monitoring of hot structures, with machine learning (ML) identifying patterns of temperature rise which have historically resulted in damage. Such ML applications can save millions of dollars wasted in the replacement and maintenance of industrial equipment by alerting engineers in real time.
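To make the monitoring idea above concrete, here is a minimal sketch of one possible pattern check: flag a temperature reading that deviates sharply from a rolling mean. The window, threshold and readings are invented for illustration; the actual ML approach in the talk may be entirely different.

```python
# Flag readings that deviate from the rolling mean of the previous few
# samples by more than a threshold. All numbers are illustrative.
from collections import deque

def anomalies(readings, window=3, threshold=100.0):
    recent = deque(maxlen=window)
    flagged = []
    for i, temp in enumerate(readings):
        if len(recent) == window:
            mean = sum(recent) / window
            if abs(temp - mean) > threshold:
                flagged.append(i)
        recent.append(temp)
    return flagged

temps = [300, 305, 310, 308, 500, 310, 306]  # kelvin, with one spike
print(anomalies(temps))  # [4]
```

A production system would replace this rule with a learned model, but the data path is the same: sensor readings stream in, a detector runs over a sliding window, and alerts go out in real time.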

  • Liked Indranil Basu

    Indranil Basu - Machine Generation of Recommended Image from Human Speech

    45 Mins
    Talk
    Advanced

    Introduction:

    Synthesizing audio for specific domains has many practical applications in creative sound design for music and film, but the application is not restricted to the entertainment industry. We propose an architecture that will convert audio (a human voice) to the voice owner's preferred image; for the time being we restrict the intended images to two domains, object design and the human body. Many times, human beings are unable to describe a design (say a PowerPoint presentation or the interior decoration of a house) or a known person by verbally described attributes, even though they can visualise that design or person. The listener, in turn, may be unable to interpret the object or human description from the speaker's words, since he or she is not visualising the same thing. Complete communication thus needs much trial and error, and is overall hazardous and time-consuming. Examples of such situations: 1) While making a presentation, an executive or manager can visualise something and express it to an employee, but the slides made from the manager's description may not match the intent. Similarly, a house or office owner may want the premises to have a certain design which he or she can visualise and express to the vendor, but the vendor may not be able to produce the same, and trial and error in this case is highly expensive. An automatically recommended image can address this problem. 2) A verbal description of a terrorist or criminal suspect (facial description and/or attributes) may not always be available to all security staff in airports, railway stations or other sensitive areas. A software system providing machine-generated images with ranked recommendations for such a suspect can immediately point to one or very few people in a crowded airport, railway station or any such sensitive place.
    Security agencies can then frisk only those people or match their attributes against existing databases. This avoids the hazard of manually checking every person and helps the security agencies do adequate checking of the recommended individuals.

    We can use a sequential architecture consisting of simple NLP and more complex deep learning algorithms, primarily based on Generative Adversarial Networks (GAN) and Neural Personalised Ranking (NPR), to help object designers and security personnel serve their specific purposes.

    The idea to combat the problem:

    I propose a combination of deep learning and recommender system approaches to tackle this problem. The architecture of the solution consists of 4 major components: 1) Speech to Text; 2) Text Classification into Person or Design; 3) Text to Image Formation; 4) Recommender System.

    We address these four steps with consecutive applications of effective machine learning and deep learning algorithms. The deep learning community has already made significant progress on text-to-image generation and on ranking-based recommender systems.

    Brief Details about the four major pillars of this problem:

    Deep Learning based Speech Recognition – The primary technique for speech-to-text could be Baidu's DeepSpeech, for which a TensorFlow implementation is readily available. Alternatively, Google Cloud Speech-to-Text enables the developer to convert voice to text. The user's voice needs to be converted into a .wav file. Our steps for DeepSpeech-2 are: fix GPU memory, add batch normalization to the RNN, implement the row convolution layer and generate text.

    Nowadays, we also have quite a few free speech-to-text tools, e.g. Google Docs voice typing, Windows Speech Recognition, Speech-notes, etc.

    Text Classification of Content – This is needed to classify the converted text into two classes, a) design description or b) human attribute description, because the two applications, and therefore the image types, are different. This may be the statistically easier part, but its importance is immense. A dictionary of words related to designs and personal attributes can be built using freely available online resources. Then a supervised algorithm using tf-idf and Latent Semantic Analysis (LSA) should be able to classify the text into the two classes, object and person. These are traditional techniques, well proven in NLP research.
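The tf-idf weighting mentioned above is easy to sketch. The toy two-document corpus below is invented; in the proposed system the documents would be converted speech transcripts:

```python
# Minimal tf-idf: term frequency in a document, scaled by how rare the
# term is across the corpus. This is the weighting step that would feed
# an LSA-based classifier.
import math

docs = [["sofa", "colour", "wall", "design"],   # design-like transcript
        ["tall", "beard", "scar", "design"]]    # person-like transcript

def tfidf(term, doc, docs):
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in docs if term in d)
    idf = math.log(len(docs) / df)
    return tf * idf

# "sofa" appears only in the design document, "design" in both.
print(tfidf("sofa", docs[0], docs))    # ~0.173: discriminative
print(tfidf("design", docs[0], docs))  # 0.0: appears everywhere
```

Terms with nonzero weight in only one class are exactly what the downstream classifier keys on; LSA then compresses the resulting term-document matrix.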

    Text to Image Formation – This is the main component of this proposal. One of the most challenging problems in computer vision today is synthesizing high-quality images from text descriptions. In recent years, GANs have been found to generate good results. Samples generated by existing text-to-image approaches can roughly reflect the meaning of the given descriptions, but they fail to contain necessary details and vivid object parts. There have been a few approaches to address this problem, all using GANs; one is the Stacked Generative Adversarial Network (StackGAN). At the heart of such approaches is the conditional GAN, an extension of the GAN in which both the generator and the discriminator receive additional conditioning variables c, yielding G(z, c) and D(x, c). This formulation allows G to generate images conditioned on c.

    In our case, we train a deep convolutional generative adversarial network (DC-GAN) conditioned on text features. These text features are encoded by a hybrid character-level convolutional-recurrent neural network. Overall, DC-GAN uses text embeddings in which the context of a word is of prime importance. The class label determined in the earlier step helps here: it simply steers DC-GAN toward generating relevant images rather than irrelevant ones. Details will be discussed during the talk.

    The most straightforward way to train a conditional GAN is to view (text, image) pairs as joint observations and train the discriminator to judge pairs as real or fake. The discriminator then has no explicit notion of whether real training images match the text embedding context. To account for this, GAN-CLS adds a third type of discriminator input during training, alongside the real/fake inputs: real images with mismatched text, which the discriminator must learn to score as fake. By learning to optimize image/text matching in addition to image realism, the discriminator can provide an additional signal to the generator. (Details in the talk.)
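Numerically, the three-input GAN-CLS discriminator objective above can be sketched as a cross-entropy with three terms. The scores below are invented stand-ins for discriminator outputs in (0, 1), and the averaging of the two "fake" terms reflects the GAN-CLS formulation as commonly presented:

```python
# GAN-CLS discriminator loss over three pair types: (real image, matching
# text) scored toward 1; (fake image, matching text) and (real image,
# mismatched text) both scored toward 0.
import math

def d_loss(s_real, s_fake, s_mismatch):
    return -(math.log(s_real)
             + (math.log(1 - s_fake) + math.log(1 - s_mismatch)) / 2)

good = d_loss(0.9, 0.1, 0.1)  # discriminator separating all three cases
bad = d_loss(0.5, 0.5, 0.5)   # discriminator merely guessing
print(good < bad)  # True
```

The mismatched-text term is the extra signal: even a photorealistic image is penalised if it does not match its caption, so the generator is pushed toward text-faithful outputs.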

    Image Recommender System – In the last step, we propose personalised image recommendation for the user from the set of images generated by the GAN-CLS architecture. Image recommendation brings the number of candidate images down to a top N (N = 3, 5, or 10 ideally) with a rank for each, so the user finds it easier to choose. Here we propose Neural Personalized Ranking (NPR), a personalized pairwise ranking model over implicit feedback datasets, inspired by Bayesian Personalized Ranking (BPR) and recent advances in neural networks. We note that NPR has since been improved to a contextually enhanced NPR. This enhanced model depends on implicit feedback from users and its contexts, and incorporates the idea of generalized matrix factorization. Contextual NPR significantly outperforms its competitors.
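The pairwise ranking objective underlying BPR, which NPR builds on, compares one positive and one negative item per user: the loss is small when the positive item outscores the negative one. A minimal sketch with invented scores:

```python
# BPR-style pairwise loss: -log(sigmoid(score_pos - score_neg)).
# Scores are illustrative stand-ins for model outputs.
import math

def bpr_loss(score_pos, score_neg):
    return -math.log(1 / (1 + math.exp(-(score_pos - score_neg))))

# Correctly ordered pair -> small loss; inverted pair -> large loss.
print(bpr_loss(2.0, 0.5) < bpr_loss(0.5, 2.0))  # True
```

NPR replaces the simple dot-product scores with a neural scoring function and adds context, but it optimises this same pairwise ordering.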

    In the presentation, we shall describe the complete sequence in detail.

  • 90 Mins
    Tutorial
    Intermediate

    Machine learning and deep learning have been rapidly adopted in providing solutions to various problems in medicine. If you wish to build scalable machine learning/deep learning-powered healthcare solutions, you need to understand how to use tools to build them.

    TensorFlow is an open source machine learning framework. It enables the use of data flow graphs for numerical computation, with automatic parallelization across several CPUs, GPUs or TPUs. Its architecture makes it ideal for implementing machine learning and deep learning algorithms.
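The data-flow-graph model above can be illustrated without TensorFlow at all: represent each operation as a node naming its inputs, then evaluate recursively. The toy graph and `evaluate` helper below are invented for illustration, not TensorFlow API:

```python
# A toy data flow graph: nodes are operations, edges carry values.
# Independent subgraphs could in principle be evaluated in parallel,
# which is what TensorFlow's runtime exploits across devices.
def evaluate(graph, node, feeds):
    if node in feeds:                  # placeholder input
        return feeds[node]
    op, *inputs = graph[node]
    args = [evaluate(graph, i, feeds) for i in inputs]
    return op(*args)

graph = {
    "sum":  (lambda a, b: a + b, "x", "y"),
    "prod": (lambda a, b: a * b, "sum", "x"),
}
print(evaluate(graph, "prod", {"x": 3.0, "y": 4.0}))  # (3 + 4) * 3 = 21.0
```

Separating graph description from execution is what lets the real framework place subgraphs on different devices and differentiate through them.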

    This tutorial will provide hands-on exposure to implement Deep Learning based healthcare solutions using TensorFlow.

  • Liked Shalini Sinha

    Shalini Sinha / Badri Narayanan Gopalakrishnan, PhD / Usha Rengaraju - Lifting Up: Deep Learning for Effective and Efficient Implementation of Anti-Hunger and Anti-Poverty Programs (AI for Social Good)

    45 Mins
    Talk
    Intermediate

    Ending poverty and achieving zero hunger are the top two goals the United Nations aims to achieve by 2030 under its sustainable development program. Hunger and poverty are byproducts of multiple factors, and fighting them requires a multi-fold effort from all stakeholders. Artificial intelligence and machine learning have transformed the way we live, work and interact, yet the economics of business has limited their application to a few segments of society. A much more conscious effort is needed to bring the power of AI to the ones who actually need it the most: people below the poverty line. Here we present our thoughts on how deep learning and big data analytics can be combined to enable effective implementation of anti-poverty programs. Advancements in deep learning and micro-diagnostics, combined with effective technology policy, are the right recipe for the progressive growth of a nation. Deep learning can help identify poverty zones across the globe from night-time imagery, where the level of light correlates with economic growth. Once areas of lower economic growth are identified, geographic and demographic data can be combined to establish micro-level diagnostics of these underdeveloped areas, and the insights from the data can help plan an effective intervention program. Machine learning can further be used to identify potential donors, investors and contributors across the globe based on their skill sets, interests, history, ethnicity, purchasing power and native connection to the location of the proposed program. Adequate resource allocation and efficient program design alone will not guarantee success unless project execution is supervised at the grass-roots level; data analytics can be used to monitor project progress and effectiveness and to detect anomalies in case of fraud or mismanagement of funds.

  • Liked Antrixsh Gupta

    Antrixsh Gupta - Creating Custom Interactive Data Visualization Dashboards with Bokeh

    90 Mins
    Workshop
    Beginner

    This will be a hands-on workshop on how to build a custom interactive dashboard application on your local machine or on any cloud service provider. You will also learn how to deploy this application with both security and scalability in mind.

    Powerful data visualization software solutions are extremely useful when building interactive data visualization dashboards, but they might not provide sufficient customization options. For those scenarios, you can use open source libraries like D3.js, Chart.js, or Bokeh, which offer a lot of flexibility for building dashboards with tailored features and visualizations.

  • Liked Anupam Purwar

    Anupam Purwar - Prediction of Wilful Default using Machine Learning

    45 Mins
    Case Study
    Intermediate

    Banks and financial institutes in India over the last few years have increasingly faced defaults by corporates. In fact, NBFC stocks have suffered huge losses in recent times. It has triggered a contagion which spilled over to other financial stocks too and adversely affected benchmark indices resulting in short term bearishness. This makes it imperative to investigate ways to prevent rather than cure such situations. However, the banks face a twin-faced challenge in terms of identifying the probable wilful defaulters from the rest and moral hazard among the bank employees who are many a time found to be acting on behest of promoters of defaulting firms. The first challenge is aggravated by the fact that due diligence of firms before the extension of loan is a time-consuming process and the second challenge hints at the need for placement of automated safeguards to reduce mal-practises originating out of the human behaviour. To address these challenges, the automation of loan sanctioning process is a possible solution. Hence, we identified important firmographic variables viz. financial ratios and their historic patterns by looking at the firms listed as dirty dozen by Reserve Bank of India. Next, we used k-means clustering to segment these firms and label them into various categories viz. normal, distressed defaulter and wilful defaulter. Besides, we utilized text and sentiment analysis to analyze the annual reports of all BSE and NSE listed firms over the last 10 years. From this, we identified word tags which resonate well with the occurrence of default and are indicators of financial performance of these firms. A rigorous analysis of these word tags (anagrams, bi-grams and co-located words) over a period of 10 years for more than 100 firms indicate the existence of a relation between frequency of word tags and firm default. 
Lift estimation of firmographic financial ratios, namely the Altman Z-score, together with the frequency of word tags uncovers for the first time the importance of text analysis in predicting the financial performance of firms and their default. Our investigation also reveals the possibility of using neural networks as a predictor of firm default. Interestingly, the neural network developed by us utilizes the power of open-source machine learning libraries and opens up the possibility of banks deploying such a model with a small one-time investment. In short, our work demonstrates the ability of machine learning to address challenges related to the prevention of wilful default. We envisage that the implementation of neural network-based prediction models and text analysis of firm-specific financial reports could help the financial industry save millions in the recovery and restructuring of loans.
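
The segmentation step described above can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the firms, the three financial ratios, and all the numbers are synthetic placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical firmographic features per firm:
# [Altman Z-score, debt/equity ratio, interest coverage]
firms = np.array([
    [3.2, 0.4, 8.0],    # healthy
    [2.9, 0.6, 6.5],
    [1.5, 1.8, 1.2],    # distressed
    [1.2, 2.1, 0.9],
    [0.4, 3.5, 0.1],    # likely defaulter
    [0.2, 4.0, -0.5],
])

# Standardize so no single ratio dominates the distance metric
X = StandardScaler().fit_transform(firms)

# Segment into three clusters, which an analyst can then label
# as normal / distressed defaulter / wilful defaulter
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(labels)
```

The cluster indices themselves are arbitrary; the labelling into "normal", "distressed defaulter" and "wilful defaulter" is a manual step informed by the cluster centroids.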

  • 20 Mins
    Demonstration
    Advanced

    In this digital era, when the attention span of customers is reducing drastically, it is imperative for a marketer to understand the following four aspects, more popularly known as "The 4R's of Marketing", to increase their ROI:

    - Right Person

    - Right Time

    - Right Content

    - Right Channel

    Only when we design and send our campaigns in such a way that they reach the right customers at the right time through the right channel, telling them about things they like or are interested in, can we expect higher conversions with lower investment. This is a problem that most organizations need to solve to stay relevant in this age of high market competition.

    Among all these, we will put special focus on appropriate content generation for a targeted user base using Markov-based models, and do a quick hack session.
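
The Markov-based generation mentioned above can be sketched as a first-order Markov chain over words. This is a toy illustration with a three-line made-up corpus; a production system would train on a large corpus of past campaign copy.

```python
import random
from collections import defaultdict

# Tiny made-up corpus of campaign snippets, separated by "|"
corpus = ("big sale on shoes today | big sale on bags today | "
          "new arrivals in shoes this week").split()

# Build the first-order transition table: word -> list of observed next words
transitions = defaultdict(list)
for w1, w2 in zip(corpus, corpus[1:]):
    transitions[w1].append(w2)

def generate(start, n_words, seed=0):
    """Walk the chain from `start`, sampling a next word at each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_words - 1):
        nxt = transitions.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

print(generate("big", 5))
```

Because next words are sampled in proportion to how often they follow the current word, frequent phrasings from past campaigns dominate the generated copy.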

    The time breakup can be:

    5 mins: Difference between Martech and traditional marketing. The 4R's of marketing and why solving for them is crucial

    5 mins: What is Smart Segments and how to solve for it, with a short demo

    5 mins: How marketers use output from Smart Segments to execute targeted campaigns

    5 mins: What is STO, how it can be solved and what is the performance uplift seen by clients when they use it

    5 mins: What is Channel Optimization, how it can be solved and what is the performance uplift seen by clients when they use it

    5 mins: Why sending the right message to customers is crucial, and introduction to appropriate content creation

    15 mins: Covering different text generation nuances, and a live demo with a walkthrough of a toy code implementation


    Sunil Jacob - Automated Recognition of Handwritten Digits in Indian Bank Cheques

    Sunil Jacob
    Sr. Architect
    Philips
    45 Mins
    Case Study
    Beginner

    Handwritten digit recognition and pattern analysis are among the most active research topics in digital image processing. Moreover, automatic handwritten digit recognition is of great technical and academic interest.

    In today’s digital realm, bank cheques are widely used around the world for various financial transactions. A rough estimate puts the number of cheques in circulation worldwide at more than 120 billion. In the Indian banking scenario, the CTS (Cheque Truncation System) has been introduced for cheque clearance. Even though cheques are now cleared quickly, manual intervention is still needed to validate the date and amount fields, which takes considerable effort.

    This case study, followed by a demo, will show how the handwritten date and amount fields were extracted and validated. By adopting this automated way of recognising handwritten digits, banks can cut down on manual work and speed up their processes. Although still in the proof-of-concept phase, this feat was achieved using computer vision and image processing techniques.

    This case study will briefly cover:

    • Detection of bounding boxes and extraction of the region of interest
    • The Fragment-and-Identify technique
    • Checking the accuracy of bounding boxes using the Intersection over Union (IoU) technique

    This case study/approach can be extended to other operating environments where handwritten digit recognition is needed.
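
The IoU accuracy check mentioned in the outline is standard and can be sketched directly (illustrative implementation; boxes are given as (x1, y1, x2, y2) corner coordinates):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (0, 0, 10, 10)))    # identical boxes -> 1.0
print(iou((0, 0, 10, 10), (20, 20, 30, 30)))  # disjoint boxes  -> 0.0
```

A detected digit's bounding box is typically accepted when its IoU with the ground-truth box exceeds a threshold such as 0.5.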


    Raunak Bhandari / Ankit Desai / Usha Rengaraju - Knowledge Graph from Natural Language: Incorporating order from textual chaos

    90 Mins
    Workshop
    Advanced

    Intro

    What if I told you that instead of the age-old saying that "a picture is worth a thousand words", it could be that "a word is worth a thousand pictures"?

    Language evolved as an abstraction of distilled information observed and collected from the environment for sophisticated and efficient interpersonal communication and is responsible for humanity's ability to collaborate by storing and sharing experiences. Words represent evocative abstractions over information encoded in our memory and are a composition of many primitive information types.

    That is why language processing is a much more challenging domain, and why it witnessed a delayed 'ImageNet' moment.

    One of the cornerstone applications of natural language processing is to leverage the language's inherent structural properties to build a knowledge graph of the world.

    Knowledge Graphs

    A knowledge graph is a rich knowledge base that represents information as an interconnected web of entities and their interactions with each other. This naturally manifests as a graph data structure, where nodes represent entities and the relationships between them are the edges.
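
As a toy sketch, such a graph can be represented as a labelled adjacency structure built from (subject, predicate, object) triplets; the entities and relations below are invented for illustration, and a real system would use a graph database or a library such as networkx.

```python
from collections import defaultdict

# Hand-written triplets standing in for automatically extracted ones
triplets = [
    ("John", "eats", "burger"),
    ("John", "drinks", "cola"),
    ("cola", "is_a", "liquid"),
    ("burger", "is_a", "food"),
]

# node -> [(relation, neighbour), ...]
graph = defaultdict(list)
for subj, pred, obj in triplets:
    graph[subj].append((pred, obj))

def neighbours(node):
    """Entities directly related to `node`, with the relation label."""
    return graph.get(node, [])

print(neighbours("John"))  # [('eats', 'burger'), ('drinks', 'cola')]
```

Reasoning tasks then become graph operations: answering "what does John consume?" is a one-hop traversal, and richer queries compose multiple hops over the relation labels.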

    Automatically constructing and leveraging it in an intelligent system is an AI-hard problem, and an amalgamation of a wide variety of fields like natural language processing, information extraction and retrieval, graph algorithms, deep learning, etc.

    It represents a paradigm shift for artificial intelligence systems by going beyond deep learning driven pattern recognition and towards more sophisticated forms of intelligence rooted in reasoning to solve much more complicated tasks.

    To elucidate the differences between reasoning and pattern recognition: consider the problem of computer vision: the vision stack processes an image to detect shapes and patterns in order to identify objects - this is pattern recognition, whereas reasoning is much more complex - to associate detected objects with each other in order to meaningfully describe a scene. For this to be accomplished, a system needs to have a rich understanding of the entities within the scene and their relationships with each other.

    To understand a scene where a person is drinking a can of cola, a system needs to understand concepts like people, that they drink certain liquids via their mouths, that liquids can be placed into metallic containers which can be held within a palm to be consumed, and the generational phenomenon that is cola, among others. A sophisticated vision system can then use this rich understanding to fetch details about the cola in order to alert the user about their calorie intake, or to update preferences for a customer. A knowledge graph's 'awareness' of world phenomena can thus be used to augment a vision system to facilitate such higher-order semantic reasoning.

    In production systems, though, reasoning may be cast as a pattern recognition problem by limiting the scope of the system for feasibility, but this may be insufficient as the complexity of the system scales or as we try to solve general intelligence.

    Challenges in building a Knowledge Graph

    There are two primary challenges in integrating knowledge graphs into systems: acquiring knowledge and constructing the graph, and effectively leveraging it with robust algorithms to solve reasoning tasks. Creation of the knowledge graph can vary widely depending on the breadth and complexity of the domain - from purely manual curation to automatic construction by leveraging unstructured/semi-structured sources of knowledge, like books and Wikipedia.

    Many natural language processing tasks are precursors to building knowledge graphs from unstructured text, like syntactic parsing, information extraction, entity linking, named entity recognition, relationship extraction, semantic parsing, semantic role labeling, entity disambiguation, etc. Open information extraction is an active area of research on extracting semantic triplets of subject ('John'), predicate ('eats') and object ('burger') from plain text, which are used to build the knowledge graph automatically.

    A very interesting approach to this problem is the extraction of frame semantics. Frame semantics relates linguistic semantics to encyclopedic knowledge; the basic idea is that the meaning of a word is linked to all essential knowledge that relates to it. For example, to understand the word "sell", it is necessary to also know about commercial transactions, which involve a seller, buyer, goods, payment, and the relations between these, all of which can be represented in a knowledge graph.

    This workshop will focus on building such a knowledge graph from unstructured text.

    Attendees will also learn good research practices, like organizing code and modularizing output, for productive data wrangling and improved algorithm performance.

    Knowledge Graph at Embibe

    We will showcase how Embibe's proprietary Knowledge Graph manifests and how it's leveraged across a multitude of projects in our Data Science Lab.


    Kshitij Srivastava / Manikant Prasad - Data Science in Containers

    45 Mins
    Case Study
    Beginner

    Containers are all the rage in the DevOps arena.

    This session is a live demonstration of how the data team at Milliman uses containers at each step of their data science workflow:

    1) How do containerized environments speed up data scientists at the data exploration stage

    2) How do containers enable rapid prototyping and validation at the modeling stage

    3) How do we put containerized models on production

    4) How do containers make it easy for data scientists to do DevOps

    5) How do containers make it easy for data scientists to host a data science dashboard with continuous integration and continuous delivery


    Dr. Neha Sehgal - Open Data Science for Smart Manufacturing

    45 Mins
    Talk
    Intermediate

    Open Data offers a tremendous opportunity in the transformation of today’s manufacturing sector to smarter manufacturing. Smart Manufacturing initiatives include digitalising production processes and integrating IoT technologies to connect machines and collect data for analysis and visualisation.

    In this talk, the linkage between various industries within the manufacturing sector will be illustrated through the lens of Open Data Science. Data on manufacturing-sector companies - company profiles, officers and financials - will be scraped from UK Open Data APIs. The work I plan to showcase at ODSC is part of the UK Made Smarter Project, where it has helped major aerospace alliances identify the champions and strugglers (SMEs) within the manufacturing sector based on open data gathered from multiple sources. The talk includes a discussion of data extraction, data cleaning, data transformation - turning raw financial information about companies into key metrics of interest - and further data analytics to cluster manufacturing companies into "Champions" and "Strugglers". The talk will showcase examples of powerful R Shiny dashboards of interest to suppliers, manufacturers and other key stakeholders in the supply chain network.
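
The metric-transformation step described above can be sketched as follows. This is a minimal Python illustration with invented company names and figures (the talk itself uses R), reducing the clustering to a single-metric threshold for brevity; the real analysis would cluster on several financial metrics at once.

```python
# Hypothetical raw financials scraped for three firms
companies = [
    {"name": "AeroParts Ltd",  "profit": 1.2e6, "revenue": 8.0e6},
    {"name": "MachineCo",      "profit": 0.1e6, "revenue": 9.0e6},
    {"name": "PrecisionWorks", "profit": 2.5e6, "revenue": 10.0e6},
]

for c in companies:
    # Transform raw financials into a key metric of interest
    c["margin"] = c["profit"] / c["revenue"]
    # Split on an illustrative 10% margin threshold
    c["segment"] = "Champion" if c["margin"] >= 0.10 else "Struggler"

print([(c["name"], c["segment"]) for c in companies])
```

In practice the "Champion"/"Struggler" split would come from clustering on several such metrics rather than a single fixed threshold.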

    Further analysis includes network analysis of industries, clustering, and deploying the model as an API using Google Cloud Platform. The presenter will discuss the necessity of an 'Analytical Thinking' approach as an aid to handling complex big data projects, and how to overcome challenges while working on real-life data science projects.


    Gaurav Godhwani / Swati Jaiswal - Fantastic Indian Open Datasets and Where to Find Them

    45 Mins
    Case Study
    Beginner

    With the big boom in the Data Science and Analytics industry in India, a lot of data scientists are keen on learning a variety of learning algorithms and data manipulation techniques. At the same time, there is growing interest among data scientists to give back to society, harness their acquired skills, and help fix some of the major burning problems in the nation. But how does one go about finding meaningful datasets connected to societal problems and plan data-for-good projects? This session will summarize our experience of working in the Data-for-Good sector over the last 5 years, sharing a few interesting datasets and associated use cases of employing machine learning and artificial intelligence in the social sector. The Indian social sector is replete with a good volume of open data on attributes like annotated images, geospatial information, time series, Indic languages, satellite imagery, etc. We will dive into understanding the journey of a Data-for-Good project, getting essential open datasets, and understanding insights from certain data projects in the development sector. Lastly, we will explore how we can work with various communities and scale our algorithmic experiments into meaningful contributions.