Blockchain with Machine Learning - The ultimate industry disruptor

The fusion of blockchain and machine learning is the ultimate game changer. Machine learning relies on high volumes of data to build models that make accurate predictions, and much of the challenge in obtaining this data lies in collecting, organizing, and auditing it for accuracy. This is an area blockchain technology can significantly improve: with smart contracts, data can be transferred directly and reliably from its place of origin, and digital signatures can guarantee its provenance along the way.

Blockchain is a good candidate for storing sensitive information that should not be modified in any way. Machine learning works on the principle of “Garbage In, Garbage Out”: if the data used to build a prediction model was corrupted in any way, the resulting model will not be of much use either. Combining these two technologies creates an industry disruptor that leverages the power of both blockchain and machine learning.
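As a toy illustration of the “Garbage In, Garbage Out” point, an integrity check like the one below can detect whether a training record was altered after its digest was registered on an immutable ledger. This is a minimal Python sketch under assumed record fields and function names (not the session's demo code):

```python
import hashlib
import json

def fingerprint(record: dict) -> str:
    """Return a SHA-256 digest of a training record, serialized canonically."""
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

# Digest registered on-chain at data-collection time (immutable).
record = {"id": 1, "amount": 250.0, "label": "legit"}
registered_digest = fingerprint(record)

# Later, before training, verify the record has not been tampered with.
assert fingerprint(record) == registered_digest       # unchanged: OK to train on

tampered = dict(record, amount=999.0)                 # corrupted copy
assert fingerprint(tampered) != registered_digest     # tampering detected
```

Because the digest lives on the ledger rather than next to the data, a corrupted record can be rejected before it ever reaches the model-building pipeline.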


Outline/Structure of the Talk

1. Introduction to Blockchain with Machine Learning

2. Advantages of combining these two technologies

3. Cloud integration with Microsoft Azure

4. Industry use cases from FinTech, Logistics and Healthcare sectors

5. Demo

Learning Outcome

Participants will understand how these two technologies are creating an impact across industries. The biggest takeaway will be seeing the industry-level use cases and benefits.

Cloud implementation will also be demoed as part of this session.

Target Audience

Machine Learning and Blockchain enthusiasts and practitioners

Prerequisites for Attendees

1. Basics of Blockchain and Machine learning

2. Data Wrangling approaches

3. Some knowledge of cloud platforms

Submitted 9 months ago

Public Feedback

  • By Anoop Kulkarni  ~  8 months ago

    Also, another curious question: is the demo only of a cloud implementation, as in only using Azure? Would you be able to showcase data being fed through the blockchain in the demo?



    • By Varun Sharma  ~  8 months ago

      Hi Anoop, thank you for your review and the comments. My updates below:

      1. Primarily, I will be showcasing the cloud (Azure) Blockchain implementation. Along with this, I can also demo how to create a small blockchain in R/Python.

      2. I don't think there is much point in adding GCP in addition to Azure, as it could be a little overwhelming for the participants.

      3. Yes, the predictive analytics will use the blockchain distributed ledger data.

      Please let me know if this answers your queries. Thank you.

      Quick question - I have submitted another proposal, 'Automated Machine Learning', as well. I am not getting any review comments for it; is it not being considered? I think that is also a very interesting topic for a deep dive.
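The "small blockchain in R/Python" mentioned in the reply above could look something like this minimal Python sketch (an illustrative approximation, not the actual demo): each block stores the SHA-256 hash of the previous block, so any edit to historical data breaks the chain.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    # Hash everything except the block's own hash field.
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def new_block(data, prev_hash: str, index: int) -> dict:
    block = {"index": index, "timestamp": time.time(),
             "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def is_valid(chain: list) -> bool:
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False                      # block contents were altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False                      # link to previous block broken
    return True

# Build a three-block chain with hypothetical transaction data.
chain = [new_block({"genesis": True}, "0" * 64, 0)]
chain.append(new_block({"tx": "A pays B 10"}, chain[-1]["hash"], 1))
chain.append(new_block({"tx": "B pays C 4"}, chain[-1]["hash"], 2))
assert is_valid(chain)

chain[1]["data"]["tx"] = "A pays B 1000"      # tamper with history
assert not is_valid(chain)                    # the chain detects it
```

A real deployment (e.g. on Azure Blockchain) adds consensus and networking on top, but the tamper-evidence property shown here is the core idea.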

  • By Dr. Vikas Agrawal  ~  8 months ago

    Dear Varun: I would love to hear which specific problem statements shown in this talk are solved in a way that makes them an industry disruptor. What specific algorithms, or combinations thereof, do you plan to demonstrate? Are you planning a deep dive into those?

    Warm Regards


    • By Varun Sharma  ~  8 months ago

      Hello Dr. Vikas, thank you for reviewing the proposal. Below are my points:

      1. Case studies:

      I will talk about Ripple (RippleNet) for global money transfer, and about Corda (CorDapp), the banks' distributed ledger, on Azure Blockchain.

      Another case study will be about AIG's blockchain-linked insurance policy.

      2. These cases use the blockchain distributed ledger/data stores on Azure.

      3. Using Azure Machine Learning Studio, we then do text analytics:

      a. Feature Hashing

      b. Named entity recognition

      c. Vowpal Wabbit

      I am open to either a deep dive or a tech talk. For a deep dive I would need a 90-minute slot; otherwise 45 minutes works.

      Please let me know your thoughts/questions.
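For readers unfamiliar with the feature hashing step listed above, here is a minimal pure-Python sketch of the idea behind a feature-hashing module (an illustrative approximation, not Azure ML Studio's actual implementation): each token is hashed into one of a fixed number of buckets, giving a fixed-size feature vector without needing to build a vocabulary first.

```python
import hashlib

def hash_features(text: str, n_buckets: int = 8) -> list:
    """Map tokens to a fixed-length count vector via hashing."""
    vec = [0] * n_buckets
    for token in text.lower().split():
        digest = hashlib.md5(token.encode("utf-8")).hexdigest()
        bucket = int(digest, 16) % n_buckets   # stable token -> bucket mapping
        vec[bucket] += 1
    return vec

v1 = hash_features("claim approved claim paid")
v2 = hash_features("claim approved claim paid")
assert v1 == v2            # hashing is deterministic
assert sum(v1) == 4        # one count per token
assert len(v1) == 8        # vector size is fixed, regardless of vocabulary
```

Because the output dimension is fixed up front, hashed features from ledger text can be fed straight into a downstream classifier even as new, previously unseen tokens appear.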
