Lifting Up: Deep Learning for implementing anti-hunger and anti-poverty programs

Aug 8th, 03:30 - 04:15 PM | Grand Ball Room 1 | 18 Interested

Ending poverty and achieving zero hunger are the top two goals the United Nations aims to achieve by 2030 under its sustainable development programme. Hunger and poverty are byproducts of multiple factors, and fighting them requires a multi-fold effort from all stakeholders. Artificial Intelligence and Machine Learning have transformed the way we live, work and interact. However, the economics of business has limited their application to a few segments of society. A much more conscious effort is needed to bring the power of AI to those who need it the most – people below the poverty line. Here we present our thoughts on how deep learning and big data analytics can be combined to enable effective implementation of anti-poverty programs.

The advancements in deep learning and micro-diagnostics, combined with effective technology policy, are the right recipe for the progressive growth of a nation. Deep learning can help identify poverty zones across the globe from night-time satellite images, where the level of light correlates with economic growth. Once areas of lower economic growth are identified, geographic and demographic data can be combined to establish micro-level diagnostics of these underdeveloped areas. The insights from the data can help plan an effective intervention program. Machine learning can further be used to identify potential donors, investors and contributors across the globe based on their skill set, interests, history, ethnicity, purchasing power and their native connection to the location of the proposed program. Adequate resource allocation and efficient program design alone will not guarantee success unless project execution is supervised at the grass-roots level. Data analytics can be used to monitor project progress and effectiveness, and to detect anomalies in case of fraud or mismanagement of funds.
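As a rough illustration of the night-time-imagery idea (a sketch under assumptions, not our production pipeline), a pretrained CNN can be fine-tuned to grade satellite image tiles by nightlight intensity as a proxy for economic activity; the folder layout and class labels below are hypothetical.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import models, transforms
from torchvision.datasets import ImageFolder

# hypothetical folder layout: nightlights/{low,medium,high}/tile_*.png
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
data = ImageFolder("nightlights", transform=tfm)
loader = DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(pretrained=True)           # transfer learning from ImageNet
model.fc = nn.Linear(model.fc.in_features, 3)      # 3 nightlight-intensity classes

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)   # train only the new head
loss_fn = nn.CrossEntropyLoss()
for images, labels in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```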

 
 

Outline/Structure of the Case Study

  • Introducing Poverty Trap
  • Deep Learning framework to Identify Underdeveloped Areas
  • Micro-level diagnostics framework using Machine Learning and Big Data Analytics
  • Key Insights for Intervention Programs
  • Machine Learning and Big Data for donor and volunteer lead generation and conversion – key datasets
  • Data capture for future research in poverty and hunger eradication
  • Conclusion

Learning Outcome

  1. The power of satellite image processing to identify zones of lower economic activity
  2. Understanding how demographic and geographic data can be used to gather micro-level insights from these poverty zones
  3. Application of Machine Learning for fund growth and transparent fund management

Target Audience

Volunteers and NGOs looking for technology solutions to make their programs more effective and efficient, Data Scientists, Data Analysts, Deep Learning Engineers, Machine Learning Engineers, Economists, Technology Policy Makers, Intervention Design Engineers, Social Scientists.

Prerequisites for Attendees

  • Curiosity
  • Empathy
  • Familiarity with basic Transfer Learning and Deep Learning concepts (not mandatory, though)
Submitted 3 months ago

Public Feedback

comment Suggest improvements to the Speaker
  • Nirav Shah  ~  2 months ago

    Hello Shalini, Badri and Usha,

    Thank you for your submission. Can you please give the breakdown of time for each speaker? It's a 45-minute talk and we would like to understand each speaker's role.

    Thanks

    Nirav

    • Usha Rengaraju  ~  2 months ago

      Hi Nirav,

      Dr. Badri Narayanan Gopalakrishnan, PhD, our distinguished speaker, will give an expert talk on economic and interdisciplinary modelling for the policy and strategy of anti-hunger and anti-poverty programs. He will talk about his experiences as the Founder of Infinite Sum Modelling, a leading economic modelling firm which specializes in providing advice to a wide range of clients including government and non-governmental organizations and companies, and as an Advisor to the World Bank, FAO, UN, European Commission, the Governments of India and the USA, WHO, PWC, KPMG, and several academic and research institutions all over the world. Poverty, employment, urbanization, disaster and agriculture are a few of the core areas of expertise at Infinite Sum Modelling.

      Shalini Sinha (Director of Data Science @ Numerify) has a deep passion for solving societal problems through Data Science and Analytics. She has also co-authored a paper on using AI to solve common household problems. She will talk about the technological implementation of how Data Analytics can be used to monitor project progress and effectiveness, and to detect anomalies in case of fraud or mismanagement of funds. She will show a demo of the anomaly detection framework which has been implemented at an NGO.

      Usha will talk about the technological implementation of how Deep Learning can help identify poverty zones and enable micro-level diagnostics of underdeveloped areas.

      Each of us will have a 15-minute time slot.

      However, we request the program committee to consider giving us an additional 15-minute time slot for Dr. Badri's expert talk. I am sure all of us at ODSC will benefit immensely from his rich expertise in this area.

      Thanks and Regards,

      Usha Rengaraju

       

       

      • Nirav Shah  ~  2 months ago

        Thanks Usha for the detailed breakdown. We will keep you posted.

        Regards

        Nirav

  • Rohit Madan  ~  2 months ago

    I only wanted to share that the motivation behind the initiative is excellent and I support it wholeheartedly.

  • Dipanjan Sarkar  ~  3 months ago

    This looks to be a very good topic around AI for social good. Is it possible to elaborate a bit more on the following:

    • The major components in the end-to-end flow for this solution, starting from scanning the areas to the actions which would be taken, and the intermediate stages
    • The methodologies and techniques around AI/ML which would be used in each of the stages

    Even bullet points or a few lines covering each component/phase would be good and would help us gain a bit more perspective, and maybe refine the structure of the talk a bit more to make it more cohesive.

    • Shalini Sinha  ~  3 months ago

      Thanks, Dipanjan, for the feedback. Our proposal is not to cover anti-poverty intervention programs themselves, but rather to provide insights from ML solutions to large-scale NGOs working in this area, to help them with the scalability issues they would face while working on such programs across the globe.

      Three key solutions that we propose here:

      1. Identification of poverty zones using a DL model on night-time satellite images, combined with demographic data available in the public domain – population, religion, local language, literacy, local climate, potential calamities, nearest urban centre, health-care facilities, etc. – to build actionable insights for designing programs, because the root cause of poverty can differ across geographies.

      2. Lead propensity model for generating a list of potential donors – what's additional here is to use geography, ethnicity, education and information available on social media and other channels to identify donors who connect to the poverty zone.

      3. Anomaly detection model for week-by-week tracking of project progress data, fund utilization and images (a minimal sketch of this idea is shown below).
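      As an illustration only – not the framework deployed at the NGO – the sketch below flags anomalous weeks in project-tracking data with an Isolation Forest; the column names and figures are hypothetical.

      ```python
      import pandas as pd
      from sklearn.ensemble import IsolationForest

      # hypothetical weekly tracking data reported by a field team
      weekly = pd.DataFrame({
          "funds_utilized": [12000, 11500, 12300, 40000, 11800],
          "beneficiaries":  [310, 295, 320, 150, 305],
          "site_visits":    [4, 5, 4, 0, 5],
      })

      model = IsolationForest(contamination=0.2, random_state=0).fit(weekly)
      weekly["flag"] = model.predict(weekly)   # -1 marks a week worth auditing (here, week 4)
      print(weekly)
      ```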

       


  • Liked Viral B. Shah

    Viral B. Shah - Growing a compiler - Getting to ML from the general-purpose Julia compiler

    45 Mins
    Keynote
    Intermediate

    Since we originally proposed the need for a first-class language, compiler and ecosystem for machine learning (ML) - a view that is increasingly shared by many, there have been plenty of interesting developments in the field. Not only have the tradeoffs in existing systems, such as TensorFlow and PyTorch, not been resolved, but they are clearer than ever now that both frameworks contain distinct "static graph" and "eager execution" interfaces. Meanwhile, the idea of ML models fundamentally being differentiable algorithms – often called differentiable programming – has caught on.

    Where current frameworks fall short, several exciting new projects have sprung up that dispense with graphs entirely, to bring differentiable programming to the mainstream. Myia, by the Theano team, differentiates and compiles a subset of Python to high-performance GPU code. Swift for TensorFlow extends Swift so that compatible functions can be compiled to TensorFlow graphs. And finally, the Flux ecosystem is extending Julia’s compiler with a number of ML-focused tools, including first-class gradients, just-in-time CUDA kernel compilation, automatic batching and support for new hardware such as TPUs.

    This talk will demonstrate how Julia is increasingly becoming a natural language for machine learning, the kind of libraries and applications the Julia community is building, the contributions from India (there are many!), and our plans going forward.

  • Liked Dr. Vikas Agrawal

    Dr. Vikas Agrawal - Non-Stationary Time Series: Finding Relationships Between Changing Processes for Enterprise Prescriptive Systems

    45 Mins
    Talk
    Intermediate

    It is too tedious to keep on asking questions, seeking explanations or setting thresholds for trends or anomalies. Why not find problems before they happen, find explanations for the glitches and suggest the shortest paths to fixing them? Businesses are always changing along with their competitive environment and processes. No static model can handle that. Using dynamic models that find time-delayed interactions between multiple time series, we need to make proactive forecasts of anomalous trends of risks and opportunities in operations, sales, revenue and personnel, based on multiple factors influencing each other over time. We need to know how to set what is “normal” and determine when the business processes from six months ago no longer apply, or apply to only 35% of the cases today, while explaining the causes of risk and sources of opportunity, their relative directions and magnitude, in the context of the decision-making and transactional applications, using state-of-the-art techniques.

    Real-world processes and businesses keep changing, with one moving part changing another over time. Can we capture these changing relationships? Can we use multiple variables to find risks on key interesting ones? We will take a fun journey culminating in the most recent developments in the field. What methods work well and which break? What can we use in practice?

    For instance, we can show a CEO that they would miss their revenue target by over 6% for the quarter, and tell them why, i.e. in what ways their business has changed over the last year. Then we provide prioritized, ordered lists of the quickest, cheapest and least risky paths to help turn the tide, with estimates of relative costs and expected probability of success.

  • Liked Juan Manuel Contreras

    Juan Manuel Contreras - Beyond Individual Contribution: How to Lead Data Science Teams

    Juan Manuel Contreras, Head of Data Science, Even
    45 Mins
    Talk
    Advanced

    Despite the increasing number of data scientists who are being asked to take on managerial and leadership roles as they grow in their careers, there are still few resources on how to manage data scientists and lead data science teams. There is also scant practical advice on how to serve as head of a data science practice: how to set a vision and craft a strategy for an organization to use data science.

    In this talk, I will describe my experience as a data science leader both at a political party (the Democratic Party of the United States of America) and at a fintech startup (Even.com), share lessons learned from these experiences and conversations with other data science leaders, and offer a framework for how new data science leaders can better transition to both managing data scientists and heading a data science practice.

  • Liked Dr. C.S.Jyothirmayee

    Dr. C.S.Jyothirmayee / Usha Rengaraju / Vijayalakshmi Mahadevan - Deep learning powered Genomic Research

    90 Mins
    Workshop
    Advanced

    Disease happens when there is a slip in the finely orchestrated dance between physiology, environment and genes. Treatment with chemicals (natural, synthetic or a combination) solved some diseases, but others persisted and were propagated along the generations. The molecular basis of disease became the prime centre of studies to understand and analyze root causes. Cancer also showed that the origin of disease, its detection, prognosis and treatment along with cure is not an uncomplicated process. Treatment of diseases has to be done on a case-by-case basis (no one size fits all).

    With the advent of next-generation sequencing, high-throughput analysis, enhanced computing power and new aspirations, neural networks can address this conundrum of complicated genetic elements (the structure and function of various genes in our systems). This requires extraction of the genomic material, its sequencing (automated systems) and analysis to map the strings of As, Ts, Gs and Cs, which yields genomic datasets. These datasets are too large for traditional and applied statistical techniques. Consequently, the important signals are often incredibly small and accompanied by blaring technical noise, which requires far more sophisticated analysis techniques. Artificial intelligence and deep learning give us the power to draw clinically useful information from the genetic datasets obtained by sequencing.

    The precision of these analyses has become vital and is the way forward for disease detection and predisposition, empowering medical authorities to make fair and situation-aware decisions about patient treatment strategies. This kind of genomic profiling, prediction and mode of disease management is useful for tailoring FDA-approved treatment strategies based on these molecular disease drivers and the patient’s molecular makeup.

    Now, the present scenario encourages designing, developing and testing medicine based on existing genetic insights and models. Deep learning models are helping to analyze and interpret tiny genetic variations (like SNPs – Single Nucleotide Polymorphisms) which underlie crucial cellular processes like metabolism and DNA wear and tear. These models are also capable of identifying disease risk signatures, such as those of cancer, from various body fluids. They have immense potential to revolutionize the healthcare ecosystem. Clinical data collection is not streamlined and is done in a haphazard manner; making such data uniformly fetchable and combinable with genetic information would increase its value, interpretability and impact on patient treatment decisions and their outcomes.

    There is a huge inflow of medical data from emerging human wearable technologies, along with other health data, which, integrated with the ability to quickly carry out complex analyses on rich genomic databases over cloud technologies, would revitalize the disease-fighting capability of humans. Last but not least is the upcoming area of application in direct-to-consumer genomics (the success of 23andMe).

    This road map promises an end-to-end system to face disease in all its forms and nature. Medical research and its applications, like gene therapies, gene-editing technologies like CRISPR, molecular diagnostics and precision medicine, could be revolutionized by tailoring a high-throughput computing method and applying it to enhanced genomic datasets.

  • Liked Dipanjan Sarkar

    Dipanjan Sarkar - Explainable Artificial Intelligence - Demystifying the Hype

    Dipanjan Sarkar, Data Scientist, Red Hat
    45 Mins
    Tutorial
    Intermediate

    The field of Artificial Intelligence powered by Machine Learning and Deep Learning has gone through some phenomenal changes over the last decade. Starting off as just a pure academic and research-oriented domain, we have seen widespread industry adoption across diverse domains including retail, technology, healthcare, science and many more. More often than not, the standard toolbox of machine learning, statistical or deep learning models remains the same. New models do come into existence, like Capsule Networks, but industry adoption of the same usually takes several years. Hence, in the industry, the main focus of data science or machine learning is more ‘applied’ rather than theoretical, and effective application of these models on the right data to solve complex real-world problems is of paramount importance.

    A machine learning or deep learning model by itself consists of an algorithm which tries to learn latent patterns and relationships from data without hard-coding fixed rules. Hence, explaining how a model works to the business always poses its own set of challenges. There are some domains in the industry, especially in the world of finance like insurance or banking, where data scientists often end up having to use more traditional machine learning models (linear or tree-based). The reason is that model interpretability is very important for the business to explain each and every decision being taken by the model. However, this often leads to a sacrifice in performance. This is where complex models like ensembles and neural networks typically give us better and more accurate performance (since true relationships are rarely linear in nature). We, however, end up being unable to have proper interpretations for model decisions.

    To address and talk about these gaps, I will take a conceptual yet hands-on approach where we will explore some of these challenges in-depth about explainable artificial intelligence (XAI) and human interpretable machine learning and even showcase with some examples using state-of-the-art model interpretation frameworks in Python!
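    As a brief, hedged illustration of the kind of tooling the tutorial refers to (one possible choice of framework, not necessarily the exact examples that will be shown), SHAP can be used to explain a tree-ensemble classifier:

    ```python
    import shap
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import GradientBoostingClassifier

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    model = GradientBoostingClassifier(random_state=0).fit(X, y)

    explainer = shap.TreeExplainer(model)            # model-specific explainer for tree ensembles
    shap_values = explainer.shap_values(X.iloc[:200])
    shap.summary_plot(shap_values, X.iloc[:200])     # global view of per-feature contributions
    ```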

  • Liked Gaurav Godhwani

    Gaurav Godhwani / Swati Jaiswal - Fantastic Indian Open Datasets and Where to Find Them

    45 Mins
    Case Study
    Beginner

    With the big boom in the Data Science and Analytics industry in India, a lot of data scientists are keen on learning a variety of learning algorithms and data manipulation techniques. At the same time, there is a growing interest among data scientists to give back to society, harness their acquired skills and help fix some of the major burning problems in the nation. But how does one go about finding meaningful datasets connected to societal problems and plan data-for-good projects? This session will summarize our experience of working in the Data-for-Good sector over the last 5 years, sharing a few interesting datasets and associated use cases of employing machine learning and artificial intelligence in the social sector. The Indian social sector is replete with a good volume of open data on attributes like annotated images, geospatial information, time series, Indic languages, satellite imagery, etc. We will dive into understanding the journey of a Data-for-Good project, getting essential open datasets and understanding insights from certain data projects in the development sector. Lastly, we will explore how we can work with various communities and scale our algorithmic experiments into meaningful contributions.

  • Liked Akshay Bahadur

    Akshay Bahadur - Minimizing CPU utilization for deep networks

    Akshay Bahadur, SDE-I, Symantec Softwares
    45 Mins
    Demonstration
    Beginner

    The advent of machine learning along with its integration with computer vision has enabled users to efficiently develop image-based solutions for innumerable use cases. A machine learning model consists of an algorithm which draws some meaningful correlation from the data without being tightly coupled to a specific set of rules. It's crucial to explain the subtle nuances of the network along with the use case we are trying to solve. With the advent of technology, the quality of images has increased, which in turn has increased the need for resources to process the images for building a model. The main question, however, is whether we can develop lightweight models while keeping the performance of the system intact.
    To connect the dots, we will talk about the development of these applications specifically aimed to provide equally accurate results without using much of the resources. This is achieved by using image processing techniques along with optimizing the network architecture.
    These applications will range from recognizing digits and alphabets which the user can 'draw' at runtime, to developing a state-of-the-art facial recognition system, predicting hand emojis, developing a self-driving system, detecting malaria and brain tumours, along with Google's 'Quick, Draw!' project of hand doodles.
    In this presentation, we will discuss the development of such applications with minimization of CPU usage.

  • Liked Tanuj Jain

    Tanuj Jain - Taming the Spark beast for Deep Learning predictions at scale

    45 Mins
    Talk
    Intermediate

    Predicting at scale is a challenging pursuit, especially when working with Deep Learning models. This is because Deep Learning models tend to have high inference time. At idealo.de, Germany's biggest price comparison platform, the Data Science team was tasked with carrying out image tagging to improve our product galleries.

    One of the biggest challenges we faced was to generate predictions for more than 300 million images within a short time while keeping the costs low. Moreover, a resolution for the scaling problem became critical since we intended to apply other Deep Learning models on the same big dataset. We ended up formulating a batch-prediction solution by employing an Apache Spark setup that ran on an AWS EMR cluster.

    Spark is notorious for being difficult to configure and tune. As a result, we had to carry out several optimisation steps in order to meet the scale requirements that adhered to our time and financial constraints. In this talk, I would present our Spark setup and focus on the journey of optimising the Spark tagging solution. Additionally, I would also talk briefly about the underlying deep learning model which was used to predict the image tags.
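    A rough sketch of the batch-prediction pattern described above (assumptions: image features already extracted into a DataFrame column and a Keras model saved at a known path – neither reflects idealo's exact setup):

    ```python
    import numpy as np
    import pandas as pd
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import pandas_udf
    from pyspark.sql.types import DoubleType

    spark = SparkSession.builder.appName("image-tagging").getOrCreate()
    df = spark.read.parquet("s3://bucket/image_features/")   # assumed column "features": array<double>

    @pandas_udf(DoubleType())
    def score(features: pd.Series) -> pd.Series:
        # the model is deserialised once per Arrow batch, then used to score the batch in bulk
        import tensorflow as tf
        model = tf.keras.models.load_model("/mnt/models/tagger.h5")   # assumed model location
        batch = np.stack(features.to_numpy())
        return pd.Series(model.predict(batch).ravel())

    df.withColumn("tag_score", score("features")).write.parquet("s3://bucket/image_tags/")
    ```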

  • Liked Gaurav Shekhar

    Gaurav Shekhar - AIOps - Prediction of Critical Events

    45 Mins
    Case Study
    Beginner

    With the rise of cloud, distributed architectures, containers, and microservices, a rise in data overload is visible. With growing numbers of DevOps processes, alerts, repeated mundane jobs, etc., there are new demands both to synthesize meaning from this influx of information and to connect it to broader business objectives.

    AIOps is the application of artificial intelligence for IT operations. AIOps uses machine learning and data science to give IT operations teams a real-time understanding of any issues affecting the availability or performance of the systems under their care. Rather than reacting to issues as they arise in the application environment, AIOps platforms allow IT operations teams to proactively manage performance challenges faster, and in real-time

    This case study focuses on solving the following business needs:

    1. With an ever-increasing rise in alerts, a large number of incidents were getting generated. There was a need to develop a framework that can generate correlations and identify correlated events, thereby reducing overall incident volume.

    2. For many incidents a reactive strategy does not work and can lead to a loss of reputation; there was a need to develop predictive capabilities that can detect anomalous events and predict critical events well in advance.

    3. Given the pressures of reducing the Resolution time and short window of opportunity available to the analysts, there was a need to provide search capabilities so that the analysts can have a head start as to how similar incidents were solved in past.

    Data from multiple systems sending alerts, including traditional IT monitoring, log events in text format, application and network performance data etc were made available for the PoC.

    The solution framework developed had a discovery phase where the base data was visualized and explored, and an NLP-driven text-mining layer where log data in text format was pre-processed and clustered, and correlations were developed to identify related events using Machine Learning algorithms. Topic mining was used to get a quick overview of a large volume of event data. Next, a temporal mining layer explored the temporal relationships between nodes and cluster groups, and the necessary features were developed on top of the associations generated from the temporal layer. Advanced machine learning algorithms were then developed on these features to predict critical events almost 12 hours in advance. Last but not least, a search layer that computed the similarity of any incident with those in the ServiceNow database was developed, providing analysts with readily available information on similar incidents and how they were solved in the past, so that the analysts do not have to reinvent the wheel.

  • Liked Shalini Sinha

    Shalini Sinha / Ashok J / Yogesh Padmanaban - Hybrid Classification Model with Topic Modelling and LSTM Text Classifier to identify key drivers behind Incident Volume

    45 Mins
    Case Study
    Intermediate

    Incident volume reduction is one of the top priorities for any large-scale service organization, along with timely resolution of incidents within the specified SLA parameters. AI and Machine Learning solutions can help IT service desks manage the incident influx as well as resolution cost by:

    • Identifying major topics from incident description and planning resource allocation and skill-sets accordingly
    • Producing knowledge articles and resolution summary of similar incidents raised earlier
    • Analyzing Root Causes of incidents and introducing processes and automation framework to predict and resolve them proactively

    We will look at different approaches to combine standard document clustering algorithms, such as Latent Dirichlet Allocation (LDA) and K-means clustering on doc2vec, along with text classification to produce easily interpretable document clusters with semantically coherent text representations. This helped the IT operations of a large FMCG client identify the key drivers/topics contributing towards incident volume and take necessary action on them.
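    A condensed sketch of the two clustering views combined here – LDA topics and K-means over doc2vec vectors – on a toy set of already-tokenised incident descriptions (gensim and scikit-learn assumed; not the client implementation):

    ```python
    from gensim.corpora import Dictionary
    from gensim.models import LdaModel
    from gensim.models.doc2vec import Doc2Vec, TaggedDocument
    from sklearn.cluster import KMeans

    docs = [["printer", "not", "responding"], ["vpn", "connection", "drops"],
            ["printer", "queue", "stuck"], ["vpn", "login", "fails"]]

    # topic view: LDA over a bag-of-words corpus
    dictionary = Dictionary(docs)
    corpus = [dictionary.doc2bow(d) for d in docs]
    lda = LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10)

    # semantic view: K-means over doc2vec embeddings
    d2v = Doc2Vec([TaggedDocument(d, [i]) for i, d in enumerate(docs)],
                  vector_size=16, min_count=1, epochs=50)
    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
        [d2v.dv[i] for i in range(len(docs))])

    print(lda.print_topics())
    print(clusters)
    ```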

  • Liked Antrixsh Gupta

    Antrixsh Gupta - Creating Custom Interactive Data Visualization Dashboards with Bokeh

    90 Mins
    Workshop
    Beginner

    This will be a hands-on workshop on how to build a custom interactive dashboard application on your local machine or on any cloud service provider. You will also learn how to deploy this application with both security and scalability in mind.

    Powerful data visualization software solutions are extremely useful when building interactive data visualization dashboards. However, these types of solutions might not provide sufficient customization options. For those scenarios, you can use open-source libraries like D3.js, Chart.js, or Bokeh to create custom dashboards, and these libraries offer a lot of flexibility for building dashboards with tailored features and visualizations.
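    A tiny sketch of the kind of interactive Bokeh app the workshop builds (run with `bokeh serve app.py`); the data and widget are illustrative only:

    ```python
    import numpy as np
    from bokeh.io import curdoc
    from bokeh.layouts import column
    from bokeh.models import ColumnDataSource, Slider
    from bokeh.plotting import figure

    x = np.linspace(0, 10, 200)
    source = ColumnDataSource(data=dict(x=x, y=np.sin(x)))

    plot = figure(title="Interactive sine wave")
    plot.line("x", "y", source=source)

    freq = Slider(start=1, end=5, value=1, step=0.1, title="Frequency")

    def update(attr, old, new):
        # recompute the curve whenever the slider moves
        source.data = dict(x=x, y=np.sin(freq.value * x))

    freq.on_change("value", update)
    curdoc().add_root(column(freq, plot))
    ```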

  • Liked Favio Vázquez

    Favio Vázquez - Complete Data Science Workflows with Open Source Tools

    90 Mins
    Tutorial
    Beginner

    Cleaning, preparing, transforming, exploring and modeling data is what we hear about all the time in data science, and these steps may be the most important ones. But that's not all there is to data science; in this talk you will learn how the combination of Apache Spark, Optimus, the Python ecosystem and Data Operations can form a whole framework for data science that will allow you and your company to go further, beyond common sense and intuition, to solve complex business problems.

  • Liked Indranil Basu

    Indranil Basu - Machine Generation of Recommended Image from Human Speech

    45 Mins
    Talk
    Advanced

    Introduction:

    Synthesizing audio for specific domains has many practical applications in creative sound design for music and film, but the application is not restricted to the entertainment industry. We propose an architecture that will convert audio (human voice) to the voice owner’s preferred image – for the time being we restrict the intended images to two domains – object design and the human body. Many times, human beings are unable to describe a design (maybe a PowerPoint presentation or the interior decoration of a house) or a known person through verbally described attributes, even though they can visualise the design or the person. The listener, in turn, may be unable to interpret the object or the person from the speaker’s verbal description because he/she is not visualising the same thing. Complete communication thus needs a lot of trial and error and is overall hazardous and time-consuming. Examples of such situations are: 1) While making a presentation, an executive or manager can visualise something and express it to an employee to build, but the slides made from the manager’s description may not match the intent. Another relevant example is that a house owner or office owner wants the premises to have a certain design which he/she can visualise and express to the concerned vendor, but the vendor may not be able to produce the same; trial and error in this case is highly expensive. Having an automated image recommended to him/her can address this problem. 2) A verbal description of a terrorist or criminal suspect (facial description and/or attributes) may not always be available to all the security staff, every time, in airports, railway stations or other sensitive areas. A software system producing a machine-generated image with ranked recommendations for such a suspect can immediately point to one or very few people in a crowded airport, railway station or any such sensitive place. Security agencies can then frisk only those people or match their attributes with an existing database. This avoids hazardous manual checking of everyone and helps the security agencies do adequate checking of the recommended individuals.

    We can use a sequential architecture consisting of simple NLP and more complex Deep Learning algorithms, primarily based on Generative Adversarial Networks (GAN) and Neural Personalised Ranking (NPR), to help object designers and security personnel serve their specific purposes.

    The idea to combat the problem:

    I propose a combination of Deep Learning and Recommender System approaches to tackle this problem. The architecture of the solution consists of 4 major components: 1) Speech to Text; 2) Text Classification into Person or Design; 3) Text to Image Formation; 4) Recommender System.

    We are trying to address these four steps with consecutive applications of effective Machine Learning and Deep Learning algorithms. The Deep Learning community has already made significant progress in text-to-image generation and also in ranking-based recommender systems.

    Brief Details about the four major pillars of this problem:

    Deep Learning based Speech Recognition – The primary technique for speech-to-text could be Baidu's DeepSpeech, for which a TensorFlow implementation is readily available. Also, Google Cloud Speech-to-Text enables the developer to convert voice to text. The user's voice needs to be converted into a .wav file. Our steps for Deep Speech 2 are: fixing GPU memory, adding batch normalization to the RNN, implementing the row convolution layer and generating text.

    Nowadays, we have quite a few free speech-to-text tools, e.g. Google Docs voice typing, Windows Speech Recognition, Speechnotes, etc.

    Text Classification of Content – This is needed to classify the converted text into two classes – a) design description or b) human attribute description – because these two applications, and therefore the image types, are different. This may be the statistically easier part, but its importance is immense. A dictionary of words related to designs and personal attributes can be built using online resources. Then, a supervised algorithm using tf-idf and Latent Semantic Analysis (LSA) should be able to classify the text into the two classes – object and person. These are traditional and proven techniques in NLP research.
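    A minimal sketch of the tf-idf + LSA classifier described above, separating 'design' from 'person' descriptions (the training texts are toy examples, not a real dictionary-based corpus):

    ```python
    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.linear_model import LogisticRegression

    texts = ["a minimalist living room with wooden shelves",
             "tall man with a scar above the left eyebrow",
             "open-plan office with glass partitions and grey carpet",
             "short woman wearing round spectacles"]
    labels = ["design", "person", "design", "person"]

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                        TruncatedSVD(n_components=2),    # the LSA step
                        LogisticRegression())
    clf.fit(texts, labels)
    print(clf.predict(["bearded man with a blue turban"]))
    ```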

    Text to Image Formation – This is the main component of this proposal. Today, one of the most challenging problems in the world of Computer Vision is synthesizing high-quality images from text descriptions. In recent years, GANs have been found to generate good results. Samples generated by existing text-to-image approaches can roughly reflect the meaning of the given descriptions, but they fail to contain necessary details and vivid object parts. There have been a few approaches to address this problem, all using GANs; one of those is Stacked Generative Adversarial Networks (StackGAN). The heart of such approaches is the conditional GAN, an extension of GAN where both generator and discriminator receive additional conditioning variables c, yielding G(z, c) and D(x, c). This formulation allows G to generate images conditioned on the variables c.

    In our case, we train deep convolutional generative adversarial network (DC-GAN) conditioned on text features. These text features are encoded by a hybrid character-level convolutional-recurrent neural network. Overall, DC-GAN uses text embeddings where the context of a word is of prime importance. Class label determined in the earlier step will be of help in this case. This will simply help DC-GAN to generate more relevant images than irrelevant ones. Details will be discussed during the talk

    The most straightforward way to train a conditional GAN is to view (text, image) pairs as joint observations and train the discriminator to judge pairs as real or fake. The discriminator has no explicit notion of whether real training images match the text embedding context. To account for this, in GAN-CLS, in addition to the real/fake inputs to the discriminator during training, a third type of input consisting of real images with mismatched text is added, which the discriminator must learn to score as fake. By learning to optimize image/text matching in addition to the image realism, the discriminator can provide an additional signal to the generator. (details are in talk)
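    A bare-bones sketch (illustrative shapes only, not the full StackGAN/GAN-CLS training setup) of a DC-GAN-style generator conditioned on a text embedding:

    ```python
    import torch
    import torch.nn as nn

    class CondGenerator(nn.Module):
        def __init__(self, noise_dim=100, text_dim=256, proj_dim=128):
            super().__init__()
            # compress the text embedding before conditioning the generator on it
            self.project = nn.Sequential(nn.Linear(text_dim, proj_dim), nn.LeakyReLU(0.2))
            self.net = nn.Sequential(
                nn.ConvTranspose2d(noise_dim + proj_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
                nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
                nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
                nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),    # 32x32 RGB image
            )

        def forward(self, z, text_emb):
            cond = self.project(text_emb)
            x = torch.cat([z, cond], dim=1).unsqueeze(-1).unsqueeze(-1)
            return self.net(x)

    z = torch.randn(4, 100)
    txt = torch.randn(4, 256)            # placeholder text embeddings (e.g. from a char-CNN-RNN encoder)
    imgs = CondGenerator()(z, txt)       # -> (4, 3, 32, 32)
    ```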

    Image Recommender System – In the last step, we propose personalised image recommendation for the user from the set of images generated by the GAN-CLS architecture. Image recommendation brings the number of candidate images down to a top N (ideally N = 3, 5 or 10) with a rank given to each, so the user finds it easier to choose. Here we propose Neural Personalized Ranking (NPR) – a personalized pairwise ranking model over implicit feedback datasets – inspired by Bayesian Personalized Ranking (BPR) and recent advances in neural networks. We would like to mention that NPR has since been improved to context-enhanced NPR. This enhanced model depends on implicit feedback from users and its contexts, and incorporates the idea of generalized matrix factorization. Contextual NPR significantly outperforms its competitors.

    In the presentation, we shall describe the complete sequence in detail

  • Liked Pankaj Kumar

    Pankaj Kumar / Abinash Panda / Usha Rengaraju - Quantitative Finance :Global macro trading strategy using Probabilistic Graphical Models

    90 Mins
    Workshop
    Advanced

    { This is a hands-on workshop on the pgmpy package. The creator of the pgmpy package, Abinash Panda, will do the code demonstration. }

    Crude oil plays an important role in macroeconomic stability and it heavily influences the performance of the global financial markets. Unexpected fluctuations in the real price of crude oil are detrimental to the welfare of both oil-importing and oil-exporting economies. Global macro hedge funds view the forecast of the price of oil as one of the key variables in generating macroeconomic projections, and it also plays an important role for policy makers in predicting recessions.

    Probabilistic Graphical Models can help in improving the accuracy of existing quantitative models for crude oil price prediction as they take into account many different macroeconomic and geopolitical variables.

    Hidden Markov Models are used to detect underlying regimes of the time-series data by discretising the continuous time-series data. In this workshop we use Baum-Welch algorithm for learning the HMMs, and Viterbi Algorithm to find the sequence of hidden states (i.e. the regimes) given the observed states (i.e. monthly differences) of the time-series.

    Belief Networks are used to analyse the probability of a regime in the Crude Oil given the evidence as a set of different regimes in the macroeconomic factors . Greedy Hill Climbing algorithm is used to learn the Belief Network, and the parameters are then learned using Bayesian Estimation using a K2 prior. Inference is then performed on the Belief Networks to obtain a forecast of the crude oil markets, and the forecast is tested on real data.
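    A minimal pgmpy sketch of the workflow described above – Hill Climbing structure search, Bayesian estimation with a K2 prior, and Variable Elimination – on toy discretised regimes (assuming a recent pgmpy release; not the workshop's actual dataset or model):

    ```python
    import pandas as pd
    from pgmpy.estimators import BayesianEstimator, HillClimbSearch, K2Score
    from pgmpy.inference import VariableElimination
    from pgmpy.models import BayesianNetwork

    # toy discretised regimes: 0 = bearish, 1 = bullish
    data = pd.DataFrame({
        "usd_index":  [0, 1, 1, 0, 1, 0, 0, 1],
        "global_gdp": [1, 1, 0, 0, 1, 0, 1, 1],
        "crude_oil":  [1, 1, 0, 0, 1, 0, 0, 1],
    })

    structure = HillClimbSearch(data).estimate(scoring_method=K2Score(data))
    model = BayesianNetwork(structure.edges())
    model.add_nodes_from(data.columns)           # keep variables even if left unconnected
    model.fit(data, estimator=BayesianEstimator, prior_type="K2")

    posterior = VariableElimination(model).query(["crude_oil"], evidence={"usd_index": 1})
    print(posterior)
    ```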

  • Liked Saikat Sarkar

    Saikat Sarkar / Dhanya Parameshwaran / Dr Sweta Choudhary / Raunak Bhandari / Srikanth Ramaswamy / Usha Rengaraju - AI meets Neuroscience

    480 Mins
    Workshop
    Advanced

    This is a mixer workshop where a lot of clinicians, medical experts, neuroimaging experts, neuroscientists, data scientists and statisticians will come under one roof to bring together this revolutionary workshop.

    The theme will be updated soon.

    Our celebrity and distinguished presenter Srikanth Ramaswamy, who is an advisor at Mysuru Consulting Group and also works on the Blue Brain Project at EPFL, will be delivering an expert talk in the workshop.

    https://www.linkedin.com/in/ramaswamysrikanth/

    { This workshop will be a combination of panel discussions, an expert talk and a neuroimaging data science workshop (applying machine learning and deep learning algorithms to neuroimaging data sets). }

    { We are currently onboarding several experts from the Neuroscience domain – neurosurgeons, neuroscientists and computational neuroscientists. Details of the speakers will be released soon. }

    Abstract for the Neuroimaging Data Science Part of the workshop:

    The study of the human brain with neuroimaging technologies is at the cusp of an exciting era of Big Data. Many data collection projects, such as the NIH-funded Human Connectome Project, have made large, high- quality datasets of human neuroimaging data freely available to researchers. These large data sets promise to provide important new insights about human brain structure and function, and to provide us the clues needed to address a variety of neurological and psychiatric disorders. However, neuroscience researchers still face substantial challenges in capitalizing on these data, because these Big Data require a different set of technical and theoretical tools than those that are required for analyzing traditional experimental data. These skills and ideas, collectively referred to as Data Science, include knowledge in computer science and software engineering, databases, machine learning and statistics, and data visualization.

    The workshop covers Data analysis, statistics and data visualization and applying cutting-edge analytics to complex and multimodal neuroimaging datasets . Topics which will be covered in this workshop are statistics, associative techniques, graph theoretical analysis, causal models, nonparametric inference, and meta-analytical synthesis.

  • Liked Raunak Bhandari

    Raunak Bhandari / Ankit Desai / Usha Rengaraju - Knowledge Graph from Natural Language: Incorporating order from textual chaos

    90 Mins
    Workshop
    Advanced

    Intro

    What if I told you that instead of the age-old saying that "a picture is worth a thousand words", it could be that "a word is worth a thousand pictures"?

    Language evolved as an abstraction of distilled information observed and collected from the environment for sophisticated and efficient interpersonal communication and is responsible for humanity's ability to collaborate by storing and sharing experiences. Words represent evocative abstractions over information encoded in our memory and are a composition of many primitive information types.

    That is why language processing is a much more challenging domain and witnessed a delayed 'imagenet' moment.

    One of the cornerstone applications of natural language processing is to leverage the language's inherent structural properties to build a knowledge graph of the world.

    Knowledge Graphs

    A knowledge graph is a form of rich knowledge base which represents information as an interconnected web of entities and their interactions with each other. This naturally manifests as a graph data structure, where nodes represent entities and the relationships between them are the edges.

    Automatically constructing and leveraging it in an intelligent system is an AI-hard problem, and an amalgamation of a wide variety of fields like natural language processing, information extraction and retrieval, graph algorithms, deep learning, etc.

    It represents a paradigm shift for artificial intelligence systems by going beyond deep learning driven pattern recognition and towards more sophisticated forms of intelligence rooted in reasoning to solve much more complicated tasks.

    To elucidate the differences between reasoning and pattern recognition: consider the problem of computer vision: the vision stack processes an image to detect shapes and patterns in order to identify objects - this is pattern recognition, whereas reasoning is much more complex - to associate detected objects with each other in order to meaningfully describe a scene. For this to be accomplished, a system needs to have a rich understanding of the entities within the scene and their relationships with each other.

    To understand a scene where a person is drinking a can of cola, a system needs to understand concepts like people, that they drink certain liquids via their mouths, liquids can be placed into metallic containers which can be held within a palm to be consumed, and the generational phenomenon that is cola, among others. A sophisticated vision system can then use this rich understanding to fetch details about cola in-order to alert the user of their calorie intake, or to update preferences for a customer. A Knowledge Graph's 'awareness' of the world phenomenons can thus be used to augment a vision system to facilitate such higher order semantic reasoning.

    In production systems though, reasoning may be cast into a pattern recognition problem by limiting the scope of the system for feasibility, but this may be insufficient as the complexity of the system scales or we try to solve general intelligence.

    Challenges in building a Knowledge Graph

    There are two primary challenges towards integrating knowledge graphs in systems: acquisition of knowledge and construction of the graph and effectively leveraging it with robust algorithms to solve reasoning tasks. Creation of the knowledge graph can vary widely depending on the breadth and complexity of the domain - from just manual curation to automatically constructing it by leveraging unstructured/semi-structured sources of knowledge, like books and Wikipedia.

    Many natural language processing tasks are precursors to building knowledge graphs from unstructured text, like syntactic parsing, information extraction, entity linking, named entity recognition, relationship extraction, semantic parsing, semantic role labeling, entity disambiguation, etc. Open information extraction is an active area of research on extracting semantic triplets of subject ('John'), predicate ('eats') and object ('burger') from plain text, which are used to build the knowledge graph automatically.
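    A toy sketch of such triplet extraction using spaCy's dependency parse (assumes the `en_core_web_sm` model is installed; real open information extraction systems are considerably more involved, and this is not Embibe's production pipeline):

    ```python
    import spacy

    nlp = spacy.load("en_core_web_sm")

    def extract_triples(text):
        """Pull rough (subject, predicate, object) triples from verbs in each sentence."""
        triples = []
        for sent in nlp(text).sents:
            for token in sent:
                if token.pos_ == "VERB":
                    subjects = [w for w in token.lefts if w.dep_ in ("nsubj", "nsubjpass")]
                    objects = [w for w in token.rights if w.dep_ in ("dobj", "attr")]
                    if subjects and objects:
                        triples.append((subjects[0].text, token.lemma_, objects[0].text))
        return triples

    print(extract_triples("John eats a burger. Marie Curie discovered radium."))
    # roughly: [('John', 'eat', 'burger'), ('Curie', 'discover', 'radium')]
    ```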

    A very interesting approach to this problem is the extraction of frame semantics. Frame semantics relates linguistic semantics to encyclopedic knowledge and the basic idea is that the meaning of a word is linked to all essential knowledge that relates to it, for eg. to understand the word "sell", it's necessary to also know about commercial transactions, which involve a seller, buyer, goods, payment, and the relations between these, which can be represented in a knowledge graph.

    This workshop will focus on building such a knowledge graph from unstructured text.

    Learn good research practices like organizing code and modularizing output for productive data wrangling to improve algorithm performance.

    Knowledge Graph at Embibe

    We will showcase how Embibe's proprietary Knowledge Graph manifests and how it's leveraged across a multitude of projects in our Data Science Lab.

  • Liked Shrutika Poyrekar

    Shrutika Poyrekar / kiran karkera / Usha Rengaraju - Introduction to Bayesian Networks

    90 Mins
    Workshop
    Advanced

    { This is a hands-on workshop. The use case is traffic analysis. }

    Most machine learning models assume independent and identically distributed (i.i.d) data. Graphical models can capture almost arbitrarily rich dependency structures between variables. They encode conditional independence structure with graphs. Bayesian network, a type of graphical model describes a probability distribution among all variables by putting edges between the variable nodes, wherein edges represent the conditional probability factor in the factorized probability distribution. Thus Bayesian Networks provide a compact representation for dealing with uncertainty using an underlying graphical structure and the probability theory. These models have a variety of applications such as medical diagnosis, biomonitoring, image processing, turbo codes, information retrieval, document classification, gene regulatory networks, etc. amongst many others. These models are interpretable as they are able to capture the causal relationships between different features .They can work efficiently with small data and also deal with missing data which gives it more power than conventional machine learning and deep learning models.

    In this session, we will discuss the concepts of conditional independence, d-separation, the Hammersley-Clifford theorem, Bayes' theorem, Expectation Maximization and Variable Elimination. There will be a code walkthrough of a simple case study.
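    A minimal pgmpy sketch, loosely following the traffic use case, of a hand-specified Bayesian network with Variable Elimination (the CPD numbers are made up for illustration, not taken from the workshop):

    ```python
    from pgmpy.factors.discrete import TabularCPD
    from pgmpy.inference import VariableElimination
    from pgmpy.models import BayesianNetwork

    model = BayesianNetwork([("rain", "traffic_jam"), ("accident", "traffic_jam")])
    model.add_cpds(
        TabularCPD("rain", 2, [[0.7], [0.3]]),
        TabularCPD("accident", 2, [[0.9], [0.1]]),
        TabularCPD("traffic_jam", 2,
                   [[0.9, 0.5, 0.4, 0.1],    # P(no jam | rain, accident)
                    [0.1, 0.5, 0.6, 0.9]],   # P(jam | rain, accident)
                   evidence=["rain", "accident"], evidence_card=[2, 2]),
    )
    assert model.check_model()

    infer = VariableElimination(model)
    print(infer.query(["traffic_jam"], evidence={"rain": 1}))
    ```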

  • Liked Maryam Jahanshahi

    Maryam Jahanshahi - Applying Dynamic Embeddings in Natural Language Processing to Analyze Text over Time

    Maryam Jahanshahi, Research Scientist, TapRecruit
    45 Mins
    Case Study
    Intermediate

    Many data scientists are familiar with word embedding models such as word2vec, which capture semantic similarity of words in a large corpus. However, word embeddings are limited in their ability to interrogate a corpus alongside other context or over time. Moreover, word embedding models either need significant amounts of data, or tuning through transfer learning of a domain-specific vocabulary that is unique to most commercial applications.

    In this talk, I will introduce exponential family embeddings. Developed by Rudolph and Blei, these methods extend the idea of word embeddings to other types of high-dimensional data. I will demonstrate how they can be used to conduct advanced topic modeling on datasets that are medium-sized, which are specialized enough to require significant modifications of a word2vec model and contain more general data types (including categorical, count, continuous). I will discuss how my team implemented a dynamic embedding model using Tensor Flow and our proprietary corpus of job descriptions. Using both categorical and natural language data associated with jobs, we charted the development of different skill sets over the last 3 years. I will specifically focus the description of results on how tech and data science skill sets have developed, grown and pollinated other types of jobs over time.

  • Liked Sunil Jacob

    Sunil Jacob - Automated Recognition of Handwritten Digits in Indian Bank Cheques

    Sunil Jacob, Sr. Architect, Philips
    45 Mins
    Case Study
    Beginner

    Handwritten digit recognition and pattern analysis are among the active research topics in digital image processing. Moreover, automatic handwritten digit recognition is of great technical and academic interest.

    In today’s digital realm, bank cheques are widely used around the world for various financial transactions. A rough estimate says that almost 120+ billion cheques move around the world. In the Indian banking scenario, the CTS cheque clearance system has arrived. Even though the cheque is cleared quickly, manual intervention is still needed to validate the date and amount fields, and there is a lot of manual effort in this area.

    This case study, followed by a demo, will walk through how handwritten date and amount fields were extracted and validated. By adopting this automated way of recognising handwritten digits, banks can cut down on manual time and increase the speed of their process. Although this is still in the proof-of-concept phase, this feat was achieved using computer vision and image processing techniques.

    This case study will briefly cover:

    • Detection of the bounding box and taking the region of interest
    • Fragment and Identify technique
    • Checking the accuracy of bounding box using Intersection over Union technique

    This case study/approach can be extended to other operative environments, where handwritten digits recognition is needed.
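    As a small illustration of the Intersection-over-Union check mentioned in the outline above (boxes given as (x1, y1, x2, y2); the coordinates are hypothetical):

    ```python
    def iou(box_a, box_b):
        """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
        x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        return inter / float(area_a + area_b - inter)

    # a predicted digit box vs. the annotated ground truth: ~0.67 overlap
    print(iou((50, 30, 120, 80), (60, 35, 125, 85)))
    ```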

  • Liked Kshitij Srivastava

    Kshitij Srivastava / Manikant Prasad - Data Science in Containers

    45 Mins
    Case Study
    Beginner

    Containers are all the rage in the DevOps arena.

    This session is a live demonstration of how the data team at Milliman uses containers at each step in their data science workflow -

    1) How do containerized environments speed up data scientists at the data exploration stage

    2) How do containers enable rapid prototyping and validation at the modeling stage

    3) How do we put containerized models on production

    4) How do containers make it easy for data scientists to do DevOps

    5) How do containers make it easy for data scientists to host a data science dashboard with continuous integration and continuous delivery