Machine learning and deep learning have been rapidly adopted to solve a wide range of real-life problems. To build scalable machine learning/deep learning solutions, you need to understand the tools used to build them.

TensorFlow is an open-source machine learning framework. It represents numerical computations as data-flow graphs and automatically parallelizes them across multiple CPUs, GPUs or TPUs. This architecture makes it well suited to implementing neural networks and other machine learning/deep learning algorithms.
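To make the data-flow-graph idea concrete, here is a toy sketch in plain Python (deliberately not the TensorFlow API): operations become nodes, edges carry values, and a node is evaluated only once all of its inputs are available — which is what lets a real framework schedule independent branches in parallel.

```python
# Toy data-flow graph: build the graph first, evaluate it later.
# This is an illustration of the concept, not TensorFlow code.

class Node:
    """A graph node holding an operation and its input nodes."""
    def __init__(self, op, inputs=()):
        self.op = op          # callable that computes this node's value
        self.inputs = inputs  # upstream nodes feeding this one

    def evaluate(self):
        # Evaluate all inputs first (depth-first), then apply this op.
        return self.op(*(n.evaluate() for n in self.inputs))

def constant(value):
    """A source node with no inputs."""
    return Node(lambda: value)

# Build the graph for (a + b) * c; nothing is computed yet.
a, b, c = constant(2.0), constant(3.0), constant(4.0)
add = Node(lambda x, y: x + y, (a, b))
mul = Node(lambda x, y: x * y, (add, c))

print(mul.evaluate())  # (2 + 3) * 4 = 20.0
```

In TensorFlow the same separation applies: the graph describes the computation, and a runtime decides where and in what order to execute it.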

This tutorial provides hands-on exposure to implementing the most important and fundamental principles of machine learning and deep learning using TensorFlow.


Outline/Structure of the Tutorial

  • Introduction to TensorFlow
    • Why TensorFlow?
    • TensorFlow Installation
    • TensorFlow Basic Examples
  • Basic Neural Networks using TensorFlow
  • Deep Neural Networks using TensorFlow
  • Demonstration of Deep Learning based Healthcare Application using TensorFlow
  • Future Research Directions
  • Conclusions

Learning Outcome

After attending this tutorial, participants will be able to…

  • Understand TensorFlow’s computation-graph approach, its core elements and its built-in functions.

  • Build machine learning/deep learning models using TensorFlow libraries.

  • Develop machine learning/deep learning based applications using TensorFlow.

Target Audience

Students, faculty members and researchers, as well as industry practitioners, who work in the field of machine learning/deep learning or wish to start building machine learning/deep learning applications.

Prerequisites for Attendees

  • Familiarity with the fundamentals of machine learning and matrices

  • No experience with TensorFlow required


Related Sessions

  • Viral B. Shah - Growing a compiler - Getting to ML from the general-purpose Julia compiler

    45 Mins

    Since we originally proposed the need for a first-class language, compiler and ecosystem for machine learning (ML) - a view that is increasingly shared by many, there have been plenty of interesting developments in the field. Not only have the tradeoffs in existing systems, such as TensorFlow and PyTorch, not been resolved, but they are clearer than ever now that both frameworks contain distinct "static graph" and "eager execution" interfaces. Meanwhile, the idea of ML models fundamentally being differentiable algorithms – often called differentiable programming – has caught on.
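The "differentiable programming" idea mentioned above — treating models as ordinary programs whose derivatives are computed automatically — can be sketched in a few lines of plain Python using forward-mode automatic differentiation with dual numbers. This is a minimal illustration only; frameworks such as TensorFlow, PyTorch and Flux use far more general (typically reverse-mode) machinery.

```python
# Forward-mode autodiff via dual numbers: each value carries its
# derivative along with it, and arithmetic propagates both.

class Dual:
    """A number paired with its derivative."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def grad(f, x):
    """Derivative of f at x, obtained by seeding der = 1."""
    return f(Dual(x, 1.0)).der

# d/dx (x^2 + 3x) = 2x + 3, which is 7 at x = 2.
print(grad(lambda x: x * x + 3 * x, 2.0))  # 7.0
```

The point of a first-class compiler approach is that `f` can be any ordinary program — loops, branches, data structures — not just a restricted graph language.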

    Where current frameworks fall short, several exciting new projects have sprung up that dispense with graphs entirely, to bring differentiable programming to the mainstream. Myia, by the Theano team, differentiates and compiles a subset of Python to high-performance GPU code. Swift for TensorFlow extends Swift so that compatible functions can be compiled to TensorFlow graphs. And finally, the Flux ecosystem is extending Julia’s compiler with a number of ML-focused tools, including first-class gradients, just-in-time CUDA kernel compilation, automatic batching and support for new hardware such as TPUs.

    This talk will demonstrate how Julia is increasingly becoming a natural language for machine learning, the kind of libraries and applications the Julia community is building, the contributions from India (there are many!), and our plans going forward.

  • Anupam Purwar - An Industrial IoT system for wireless instrumentation: Development, Prototyping and Testing

    45 Mins

    The next generation of machinery, viz. turbines, aircraft and boilers, will rely heavily on smart data acquisition and monitoring to meet performance and reliability requirements. These systems require accurate acquisition of parameters such as pressure, temperature and heat flux in real time for structural health monitoring, automation and intelligent control, which calls for sophisticated instrumentation to measure these parameters and transmit them in real time. In the present work, a wireless sensor network (WSN) based on a novel high-temperature thermocouple cum heat flux sensor is proposed. The architecture of this WSN has been evolved with robustness, safety and affordability in mind. A WiFi communication protocol based on the IEEE 802.11 b/g/n specification is used to create a secure, low-power WSN. The thermocouple cum heat flux sensor and the instrumentation enclosure have been designed using rigorous finite element modelling. The sensor and wireless transmission unit are housed in an enclosure capable of withstanding pressures and temperatures of up to 100 bar and 2500 K respectively. The sensor signal is conditioned before being passed to the ESP8266-based ESP12E wireless transmitter, which sends the data to a web server; the system then uploads the data to a cloud database in real time, providing seamless data availability to decision makers across the globe without any time lag and with ultra-low power consumption. The real-time data is envisaged to be used for structural health monitoring of hot structures, with machine learning (ML) identifying patterns of temperature rise that have historically resulted in damage. Such an ML application can save millions of dollars otherwise spent on the replacement and maintenance of industrial equipment by alerting engineers in real time.
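The monitoring idea in this abstract — flagging patterns of temperature rise before they cause damage — can be illustrated with a deliberately simple rolling-slope rule in plain Python. The threshold, window and data below are made up for illustration; the actual system described would use a trained ML model on real sensor streams.

```python
# Hypothetical anomaly check on a temperature stream: flag any sample
# that is more than `max_rise` kelvin above the reading `window`
# samples earlier. A stand-in for the ML-based detection described.

def rising_alerts(readings, window=3, max_rise=15.0):
    """Return the indices of readings showing an abnormal rise."""
    alerts = []
    for i in range(window, len(readings)):
        if readings[i] - readings[i - window] > max_rise:
            alerts.append(i)
    return alerts

# Steady readings followed by a sudden climb (values in kelvin).
temps = [300.0, 301.0, 300.5, 301.5, 330.0, 355.0]
print(rising_alerts(temps))  # [4, 5]
```

A real deployment would run such a check (or a learned model) server-side against the cloud database the WSN uploads to, raising alerts to engineers in real time.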

  • Anupam Purwar - Prediction of Wilful Default using Machine Learning

    45 Mins
    Case Study

    Banks and financial institutions in India have increasingly faced corporate defaults over the last few years. In fact, NBFC stocks have suffered huge losses in recent times, triggering a contagion that spilled over to other financial stocks and adversely affected benchmark indices, resulting in short-term bearishness. This makes it imperative to investigate ways to prevent, rather than cure, such situations. However, banks face a twin challenge: identifying probable wilful defaulters from the rest, and moral hazard among bank employees, who are often found to be acting at the behest of the promoters of defaulting firms. The first challenge is aggravated by the fact that due diligence of firms before extending a loan is a time-consuming process; the second points to the need for automated safeguards to reduce malpractices arising from human behaviour. To address these challenges, automation of the loan-sanctioning process is a possible solution. Hence, we identified important firmographic variables, viz. financial ratios and their historic patterns, by examining the firms listed as the "dirty dozen" by the Reserve Bank of India. Next, we used k-means clustering to segment these firms and label them into categories, viz. normal, distressed defaulter and wilful defaulter. In addition, we applied text and sentiment analysis to the annual reports of all BSE- and NSE-listed firms over the last 10 years. From this, we identified word tags that resonate with the occurrence of default and are indicators of the financial performance of these firms. A rigorous analysis of these word tags (anagrams, bi-grams and co-located words) over a period of 10 years for more than 100 firms indicates a relation between word-tag frequency and firm default.
    Lift estimation using firmographic financial ratios, namely the Altman Z-score, together with word-tag frequency uncovers for the first time the importance of text analysis in predicting the financial performance of firms and their default. Our investigation also reveals the possibility of using neural networks as predictors of firm default. Interestingly, the neural network we developed utilizes the power of open-source machine learning libraries, opening up the possibility of banks deploying such a model with a small one-time investment. In short, our work demonstrates the ability of machine learning to address challenges related to the prevention of wilful default. We envisage that the implementation of neural-network-based prediction models and text analysis of firm-specific financial reports could help the financial industry save millions in the recovery and restructuring of loans.
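The k-means segmentation step described in this abstract can be sketched in plain Python on made-up one-dimensional "financial ratio" values (the actual features, data and cluster count used by the authors are not given here, so everything below is illustrative).

```python
# Plain k-means on 1-D data: repeatedly assign each point to its
# nearest center, then move each center to the mean of its points.

def kmeans_1d(values, centers, iters=10):
    """Cluster 1-D values around the given initial centers."""
    clusters = [[] for _ in centers]
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            nearest = min(range(len(centers)),
                          key=lambda j: abs(v - centers[j]))
            clusters[nearest].append(v)
        # Keep a center unchanged if its cluster came up empty.
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers, clusters

# Hypothetical Altman-Z-like scores: healthy firms near 3, distressed near 1.
scores = [3.1, 2.9, 3.3, 1.1, 0.9, 1.2]
centers, clusters = kmeans_1d(scores, centers=[0.0, 5.0])
print(sorted(round(c, 2) for c in centers))  # [1.07, 3.1]
```

In the study, each resulting cluster would then be inspected and labelled (normal, distressed defaulter, wilful defaulter) before feeding into downstream models.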

  • 90 Mins

    Deep learning has been widely adopted in data science. A deep learning model learns to perform classification tasks directly from images, text or sound. The model is trained using a large set of labeled data and neural network architectures that contain many layers.

    Keras is one of the most powerful and easy-to-use open-source libraries for developing and evaluating deep learning models. It is a high-level neural network API capable of running on top of low-level libraries such as TensorFlow, Theano and CNTK. It enables fast experimentation through a high-level, user-friendly, modular and extensible API. Keras code is portable and can run on both CPU and GPU.

    This workshop will provide hands-on exposure to implement deep learning models using Keras.
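To show what a small Keras `Sequential` model of stacked `Dense` layers actually computes, here is a toy forward pass in plain Python. The weights, layer sizes and activations below are made up for illustration; in Keras they would be learned during training rather than written by hand.

```python
import math

# One fully connected layer: activation(W.x + b), row per output unit.
def dense(inputs, weights, biases, activation):
    return [activation(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

relu = lambda z: max(0.0, z)
sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))

# Two inputs -> two hidden ReLU units -> one sigmoid output,
# analogous to Sequential([Dense(2, activation="relu"),
#                          Dense(1, activation="sigmoid")]).
x = [1.0, 2.0]
hidden = dense(x, weights=[[0.5, -0.2], [0.3, 0.8]],
               biases=[0.1, -0.1], activation=relu)
output = dense(hidden, weights=[[1.0, -1.0]], biases=[0.0],
               activation=sigmoid)
print(output)  # a single probability-like value between 0 and 1
```

The appeal of Keras is that this per-layer arithmetic, plus backpropagation and hardware placement, is handled for you by the backend library.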

  • 45 Mins

    Artificial intelligence (AI) has been rapidly adopted in various spheres of medicine, such as microbiological analysis, drug discovery, disease diagnosis, genomics, medical imaging and bioinformatics, to translate biomedical data into improved human healthcare. Healthcare using AI is among the fastest-growing research areas across the globe, and a great deal of research is being carried out in this area by researchers from the technical and medical sectors as well as industry. Automation in healthcare using machine learning/deep learning helps physicians make faster, cheaper and more accurate diagnoses.

    In this session, I shall demonstrate three healthcare applications we have developed using deep learning, and discuss the scope for further research in this area.