Schedule: Aug 10th, 10:10 – 11:40 AM · Place: Jupiter 1 · 29 Interested

Machine learning and deep learning have been rapidly adopted in various spheres of medicine, such as drug discovery, disease diagnosis, genomics, medical imaging and bioinformatics, to translate biomedical data into improved human healthcare. Machine learning/deep learning based healthcare applications help physicians make faster, cheaper and more accurate diagnoses.

We have successfully developed three deep learning based healthcare applications and are currently working on two more healthcare related projects. In this workshop, we will discuss one healthcare application, titled "Deep Learning based Craniofacial Distance Measurement for Facial Reconstructive Surgery", which we developed using TensorFlow. Craniofacial distances play an important role in providing information about facial structure. They comprise measurements of the head and face taken from an image. They are used in facial reconstructive procedures such as cephalometry, treatment planning of various malocclusions, craniofacial anomalies, facial contouring, facial rejuvenation and different forehead surgeries, in which reliable and accurate data are essential and cannot be compromised.

Our discussion of the healthcare application will cover the precise problem statement, the major steps involved in the solution (deep learning based face detection & facial landmarking, and craniofacial distance measurement), the dataset, experimental analysis, and the challenges faced and overcome along the way. Subsequently, we will provide hands-on exposure to implementing this healthcare solution using TensorFlow. Finally, we will briefly discuss possible extensions of our work and the future scope of research in the healthcare sector.
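Once a landmark model has produced facial keypoint coordinates, each craniofacial distance reduces to a calibrated Euclidean distance between a pair of landmarks. A minimal sketch in Python (the landmark names, pixel coordinates and calibration factor below are invented for illustration; the workshop's pipeline obtains real landmarks from a deep learning model):

```python
import math

# Hypothetical landmark coordinates (x, y) in pixels, in the shape a
# facial landmarking model might return them.
landmarks = {
    "nasion":   (212.0, 148.0),   # bridge of the nose
    "gnathion": (218.0, 310.0),   # lowest point of the chin
}

PIXELS_PER_MM = 3.2  # assumed calibration factor for this image

def craniofacial_distance(p, q, pixels_per_mm):
    """Euclidean distance between two landmarks, converted to millimetres."""
    return math.dist(p, q) / pixels_per_mm

face_height_mm = craniofacial_distance(
    landmarks["nasion"], landmarks["gnathion"], PIXELS_PER_MM)
print(round(face_height_mm, 1))  # → 50.7
```

The calibration factor would in practice come from a reference object or known camera geometry, since pixel distances alone are not anatomically meaningful.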


Outline/Structure of the Workshop

  • Significance of Deep Learning for Healthcare Solutions (10 mins)
  • Discussion of Healthcare Application - 'Deep Learning based Craniofacial Distance Measurement for Facial Reconstructive Surgery' (20 mins)
    • Craniofacial distances and their application in facial reconstructive surgeries (5 mins)
    • Issues in conventional method of measuring craniofacial distances (3 mins)
    • Problem statement (2 mins)
    • Proposed solution (5 mins)
    • Introduction to TensorFlow components and program structure (5 mins)
  • Hands-on Healthcare Application (Craniofacial Distance Measurement for Facial Reconstructive Surgery) using TensorFlow (50 mins)
    • Getting started with Google Colaboratory (5 mins)
  • Practice sample programs using TensorFlow in Google Colaboratory (10 mins)
    • Explanation of dataset (5 mins)
    • Justification for Python libraries/packages used (5 mins)
    • Implementation of healthcare application using pretrained CNN (10 mins)
    • Implementation of healthcare application using CE-CLM model (10 mins)
    • Results and discussion (5 mins)
  • Future Research Directions (5 mins)
  • Q & A (5 mins)

Learning Outcome

After attending this workshop, participants will have an overview of TensorFlow and of how to build machine learning/deep learning based applications using it.

Target Audience

Data Scientists, Machine Learning/Deep Learning Practitioners, Python Programmers, Doctors, Researchers, and Students & Faculty Members from sectors such as engineering, technology and medicine.

Prerequisites for Attendees

OR

Download files from https://github.com/drmayurimehta/Tensorflow and https://drive.google.com/file/d/145g_UsAyk1n8-WXQg9OOaS69q0dbCqub/view?usp=sharing.

Upload all these files to a folder named 'ODSC2019' in your Google Drive.

Submitted 4 months ago

Public Feedback

Suggest improvements to the Speaker
  • Naresh Jain
    By Naresh Jain  ~  2 months ago

    Dr. Mayuri,

    Can you please provide a time-wise breakup of the workshop outline? 

    I'm concerned the hands-on part will be too short to do justice to the topic. Can you please share some details on what exactly you are planning for the hands-on part?

    • Dr. Mayuri Mehta
      By Dr. Mayuri Mehta  ~  2 months ago

      Dear Mr. Nilesh,

      Kindly find the time-wise breakup of our workshop below:

      • Significance of Deep Learning for Healthcare Solutions (10 mins)
      • Demonstration of Healthcare Applications (30 mins)
        1. A Deep Learning based Automated Approach to Detect Dry Eye Disease
        2. Deep Learning based Craniofacial Distance Measurement for Facial Reconstructive Surgery
      • Hands-on Deep Neural Network based Healthcare Application using TensorFlow (45 mins)
      • Future Research Directions (5 mins)

      I understand your concern regarding the time constraint for the hands-on part of our workshop. We had earlier submitted two separate proposals – one for a demonstration of TWO healthcare applications and another a tutorial for hands-on work with the same TWO applications. However, as per a suggestion from a member of the program committee, we revised our proposals and combined them into ONE workshop.

      We made the following changes to our proposal to fit the revised timings without compromising the effectiveness of our workshop.

      • Earlier we planned to demonstrate TWO healthcare applications in a 45-min demonstration session. Now the same will be covered in 30 mins, with a specific, to-the-point focus on the significant steps of the applications. It is necessary to demonstrate and discuss these applications before the hands-on session so that attendees understand the major steps involved in developing healthcare applications.
      • Major change to fit the timeline – earlier we planned a 90-min hands-on tutorial for two applications. Now we will conduct a 45-min hands-on session for ONE application (Deep Learning based Craniofacial Distance Measurement for Facial Reconstructive Surgery).

      We have made the above changes because we are confident the revised proposal will be more appealing, educative and advantageous, as it is more inclusive (demonstration and hands-on of the same application in a single workshop).

      We would appreciate any further suggestions from your side.

      Thank you.

      • Naresh Jain
        By Naresh Jain  ~  2 months ago

        Thanks for the prompt response and sorry about the confusion. 

        Just thinking out loud, would it be possible to do the following:

        • Significance of Deep Learning for Healthcare Solutions (10 mins) - Skip this
        • Demonstration of Deep Learning based Automated Approach to Detect Dry Eye Disease - 10 mins
        • Hands-on session on this specific problem and data set - 30 mins (Get the participants to work on a specific task. Entire configuration and setup to be made available online beforehand so participants can get started immediately)
        • Demonstration of Deep Learning based Craniofacial Distance Measurement for Facial Reconstructive Surgery - 10 mins
        • Hands-on session on this specific problem and data set - 30 mins
        • Future Research Directions - 5 mins
        • Q & A - 5 mins

        If you agree, can you please update the proposal with the same. Also, request you to please detail out the activity the participants would be doing in each of the 30 mins hands-on session.

        • Dr. Mayuri Mehta
          By Dr. Mayuri Mehta  ~  2 months ago

          Dear Naresh (apologies for getting your name wrong in my previous reply),

          Thank you for the suggestions. However, please find our views on the same below:

          • Participants may have general knowledge of deep learning. However, we feel it is necessary to brief them on the significance of deep learning specifically for the healthcare sector.
          • We strongly feel that a thorough hands-on session on one healthcare application, built from scratch, will give participants better insights and practical exposure, and therefore more confidence. It would be difficult to achieve the same objective in a short time if we covered two applications.
          • Also, we need at least 30 mins to demonstrate two applications, as the demonstration of each includes the problem statement, proposed solution, dataset, empirical analysis and general challenges faced.
          • We agree with your suggestion to include 5 min for Q & A session and will do the needful.

          Kindly confirm if we can proceed as above, so that we can update the proposal accordingly.

          In the hands-on session, participants will implement the healthcare application following its key steps from scratch. We will provide them the necessary configuration and setup for TensorFlow beforehand, so that they can come to the workshop with the experimental setup in place. We will share the required dataset with them during the workshop.

          • Naresh Jain
            By Naresh Jain  ~  2 months ago

            Request you to please update the proposal with what you think is best. Based on that the program committee will take a final call.

            My 2 cents: If you want to focus on demo, just stick to the demo. Don't add the hands-on session and complicate it. If you really want to do hands-on, then maybe pick only 1 case study.

            • Dr. Mayuri Mehta
              By Dr. Mayuri Mehta  ~  2 months ago

              Dear Mr. Naresh,

              As per our earlier views, and also as per your suggestion, we are keeping our two proposals separate – one for the demonstration (proposal title: Demonstration of Deep Learning based Healthcare Applications) and another for the workshop (proposal title: Building Deep Learning based Healthcare Application using TensorFlow). As these healthcare applications have been developed by us, we can do justice to both the demonstration and the hands-on, and so we are keeping the proposals separate.

              We have incorporated your suggestion of keeping the demonstration part separate from the hands-on, and have also considered only one case study in depth for the hands-on, including its configuration and setup.

              Kindly review our updated proposals.

              Thank you.

  • By  ~  4 months ago

    Hi, nicely organized and detailed slide deck. The session seems interesting.

  • Dipanjan Sarkar
    By Dipanjan Sarkar  ~  4 months ago

    Hi, thanks for the submission. Can we get some specifics on the exact use-cases / case-studies from healthcare which would be covered in this proposed session?

    • Dr. Mayuri Mehta
      By Dr. Mayuri Mehta  ~  4 months ago

      Hello Dipanjan, I will mainly focus on the following topics in this session:

      1. Detection of disease from images/video using a deep learning based model (Case Study: Detection of Dry Eye Disease by Analyzing Eye Images/Video)

      2. Deep learning based precise human body measurements from a human image for reconstructive surgeries (Case Study: Craniofacial Measurements (Facial Index and Nasal Index) for Facial Reconstructive Surgery. For this case study, we have collected data from the population of Gujarat. In future, we aim to consider the population of West India and then all of India.)

      Thank you.

      • Sandhya Harikumar
        By Sandhya Harikumar  ~  4 months ago

        What are the prerequisites to attend this session?

        • Dr. Mayuri Mehta
          By Dr. Mayuri Mehta  ~  4 months ago

          Hello Sandhya, 

          You should know the fundamentals of machine learning to attend this session. 


  • 45 Mins
    Keynote
    Intermediate

    Since we originally proposed the need for a first-class language, compiler and ecosystem for machine learning (ML) - a view that is increasingly shared by many, there have been plenty of interesting developments in the field. Not only have the tradeoffs in existing systems, such as TensorFlow and PyTorch, not been resolved, but they are clearer than ever now that both frameworks contain distinct "static graph" and "eager execution" interfaces. Meanwhile, the idea of ML models fundamentally being differentiable algorithms – often called differentiable programming – has caught on.

    Where current frameworks fall short, several exciting new projects have sprung up that dispense with graphs entirely, to bring differentiable programming to the mainstream. Myia, by the Theano team, differentiates and compiles a subset of Python to high-performance GPU code. Swift for TensorFlow extends Swift so that compatible functions can be compiled to TensorFlow graphs. And finally, the Flux ecosystem is extending Julia’s compiler with a number of ML-focused tools, including first-class gradients, just-in-time CUDA kernel compilation, automatic batching and support for new hardware such as TPUs.

    This talk will demonstrate how Julia is increasingly becoming a natural language for machine learning, the kind of libraries and applications the Julia community is building, the contributions from India (there are many!), and our plans going forward.

  • Dr. C.S.Jyothirmayee / Usha Rengaraju / Vijayalakshmi Mahadevan - Deep learning powered Genomic Research

    90 Mins
    Workshop
    Advanced

    Disease happens when there is a slip in the finely orchestrated dance between physiology, environment and genes. Treatment with chemicals (natural, synthetic or a combination) solved some diseases, but others persisted and were propagated along the generations. The molecular basis of disease became a prime focus of study to understand and analyze root causes. Cancer also showed that the origin of a disease, its detection, prognosis, treatment and cure is not an uncomplicated process. Diseases have to be treated on a case-by-case basis (no one size fits all).

    With the advent of next-generation sequencing, high-throughput analysis and enhanced computing power, neural networks offer new hope for addressing this conundrum of complicated genetic elements (the structure and function of the various genes in our systems). This requires extraction of the genomic material, its (automated) sequencing, and analysis to map the strings of As, Ts, Gs and Cs, which yields genomic datasets. These datasets are too large for traditional applied statistical techniques, and the important signals are often incredibly small amid blaring technical noise, which calls for far more sophisticated analysis techniques. Artificial intelligence and deep learning give us the power to draw clinically useful information from the genetic datasets obtained by sequencing.

    The precision of these analyses has become vital and is the way forward for disease detection and predisposition assessment, empowering medical authorities to make fair, situationally informed decisions about patient treatment strategies. This kind of genomic profiling, prediction and disease management is useful for tailoring FDA-approved treatment strategies to these molecular disease drivers and the patient's molecular makeup.

    The present scenario encourages designing, developing and testing medicines based on existing genetic insights and models. Deep learning models are helping to analyze and interpret tiny genetic variations (like SNPs – single nucleotide polymorphisms) which underlie crucial cellular processes like metabolism and DNA wear and tear. These models can also identify disease risk signatures, such as those for cancer, from various body fluids. They have immense potential to revolutionize the healthcare ecosystem. Clinical data collection is currently haphazard rather than streamlined; making such data uniformly fetchable and combinable with genetic information would increase the value, interpretability and decisiveness of patient treatment modalities and their outcomes.

    There is a huge inflow of medical data from emerging human wearable technologies. Integrating it with other health data, along with the ability to quickly carry out complex analyses on rich genomic databases over cloud technologies, would revitalize humanity's disease-fighting capability. A last, still emerging area of application is direct-to-consumer genomics (witness the success of 23andMe).

    This road map promises an end-to-end system to face disease in all its forms. Medical research and its applications, such as gene therapies, gene-editing technologies like CRISPR, molecular diagnostics and precision medicine, could be revolutionized by tailoring high-throughput computing methods to enhanced genomic datasets.

  • Johnu George / Ramdoot Kumar P - A Scalable Hyperparameter Optimization framework for ML workloads

    20 Mins
    Demonstration
    Intermediate

    In machine learning, hyperparameters are parameters that govern the training process itself. For example, the learning rate, the number of hidden layers and the number of nodes per layer are typical hyperparameters for neural networks. Hyperparameter tuning is the process of searching for the best hyperparameters to initialize the learning algorithm, thus improving training performance.

    We present Katib, a scalable and general hyperparameter tuning framework based on Kubernetes which is ML framework agnostic (TensorFlow, PyTorch, MXNet, XGBoost, etc.). You will learn about Katib in Kubeflow, an open source ML toolkit for Kubernetes, as we demonstrate the advantages of hyperparameter optimization by running a sample classification problem. In addition, as we dive into the implementation details, you will learn how to contribute as we expand this platform to include AutoML tools.
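As a rough illustration of what a tuning trial loop automates, here is a framework-free random search sketch (the toy objective function and parameter ranges are invented for illustration; Katib runs real training jobs on Kubernetes instead):

```python
import random

random.seed(0)

# Toy stand-in for a training run: returns a "validation loss" for a
# hyperparameter configuration. In Katib this would be a real trial.
def validation_loss(lr, hidden_units):
    return (lr - 0.01) ** 2 * 1e4 + ((hidden_units - 64) ** 2) * 1e-3

# Search space, analogous to the parameter ranges an experiment spec declares.
def sample_config():
    return {
        "lr": 10 ** random.uniform(-4, -1),      # log-uniform over [1e-4, 1e-1]
        "hidden_units": random.randint(16, 256),
    }

# Run 50 "trials" and keep the configuration with the lowest loss.
trials = [sample_config() for _ in range(50)]
best = min(trials, key=lambda c: validation_loss(c["lr"], c["hidden_units"]))
print(best)
```

Random search is only the simplest strategy; frameworks like Katib also offer grid search and Bayesian optimization under the same experiment abstraction.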

  • 45 Mins
    Demonstration
    Intermediate

    Recent advancements in AI are proving beneficial in the development of applications in various spheres of the healthcare sector, such as microbiological analysis, drug discovery, disease diagnosis, genomics, medical imaging and bioinformatics, for translating large-scale data into improved human healthcare. Automation in healthcare using machine learning/deep learning assists physicians in making faster, cheaper and more accurate diagnoses.

    Due to the increasing availability of electronic healthcare data (structured as well as unstructured) and the rapid progress of analytics techniques, a lot of research is being carried out in this area. Popular AI techniques include machine learning/deep learning for structured data and natural language processing for unstructured data. Guided by relevant clinical questions, powerful deep learning techniques can unlock clinically relevant information hidden in massive amounts of data, which in turn can assist clinical decision making.

    We have successfully developed three deep learning based healthcare applications using TensorFlow and are currently working on three more healthcare related projects. In this demonstration session, we shall first briefly discuss the significance of deep learning for healthcare solutions. Next, we will demonstrate two deep learning based healthcare applications developed by us. The discussion of each application will include the precise problem statement, proposed solution, data collected & used, experimental analysis, and challenges encountered & overcome along the way. Finally, we will briefly discuss the other applications on which we are currently working and the future scope of research in this area.

  • Favio Vázquez - Complete Data Science Workflows with Open Source Tools

    90 Mins
    Tutorial
    Beginner

    Cleaning, preparing, transforming, exploring and modeling data is what we hear about all the time in data science, and these steps may be the most important ones. But that's not all there is to data science. In this talk you will learn how the combination of Apache Spark, Optimus, the Python ecosystem and Data Operations can form a complete framework for data science that will allow you and your company to go further, beyond common sense and intuition, to solve complex business problems.

  • Anupam Purwar - Prediction of Wilful Default using Machine Learning

    45 Mins
    Case Study
    Intermediate

    Banks and financial institutions in India have increasingly faced defaults by corporates over the last few years. In fact, NBFC stocks have suffered huge losses in recent times, triggering a contagion that spilled over to other financial stocks and adversely affected benchmark indices, resulting in short-term bearishness. This makes it imperative to investigate ways to prevent, rather than cure, such situations. However, banks face a twin challenge: identifying probable wilful defaulters from the rest, and moral hazard among bank employees, who are many a time found to be acting at the behest of promoters of defaulting firms. The first challenge is aggravated by the fact that due diligence on firms before extending a loan is a time-consuming process, and the second hints at the need for automated safeguards to reduce malpractice originating from human behaviour. To address these challenges, automation of the loan sanctioning process is a possible solution. Hence, we identified important firmographic variables, viz. financial ratios and their historic patterns, by looking at the firms listed as the 'dirty dozen' by the Reserve Bank of India. Next, we used k-means clustering to segment these firms and label them into various categories, viz. normal, distressed defaulter and wilful defaulter. Besides, we utilized text and sentiment analysis to analyze the annual reports of all BSE and NSE listed firms over the last 10 years. From this, we identified word tags which resonate well with the occurrence of default and are indicators of the financial performance of these firms. A rigorous analysis of these word tags (unigrams, bi-grams and co-located words) over a period of 10 years for more than 100 firms indicates a relation between word tag frequency and firm default.

    Lift estimation of firmographic financial ratios, namely the Altman Z-score, and of word tag frequency uncovers for the first time the importance of text analysis in predicting the financial performance of firms and their default. Our investigation also reveals the possibility of using neural networks as a predictor of firm default. Interestingly, the neural network developed by us utilizes the power of open source machine learning libraries, opening up the possibility of banks deploying such a model with a small one-time investment. In short, our work demonstrates the ability of machine learning to address challenges related to the prevention of wilful default. We envisage that the implementation of neural network based prediction models and text analysis of firm-specific financial reports could help the financial industry save millions in the recovery and restructuring of loans.
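The Altman Z-score mentioned above is a fixed linear combination of five financial ratios. A sketch with the original 1968 coefficients for public manufacturing firms, applied to invented firm figures (all numbers are illustrative, not from the study's data):

```python
def altman_z(working_capital, retained_earnings, ebit,
             market_equity, sales, total_assets, total_liabilities):
    """Original (1968) Altman Z-score for public manufacturing firms."""
    return (1.2 * working_capital / total_assets
            + 1.4 * retained_earnings / total_assets
            + 3.3 * ebit / total_assets
            + 0.6 * market_equity / total_liabilities
            + 1.0 * sales / total_assets)

# Illustrative, made-up figures for one firm (consistent currency units):
z = altman_z(working_capital=120, retained_earnings=300, ebit=90,
             market_equity=800, sales=1100, total_assets=1000,
             total_liabilities=600)

# Classic interpretation bands: below 1.81 = distress, above 2.99 = safe.
band = "distress" if z < 1.81 else "safe" if z > 2.99 else "grey"
print(round(z, 2), band)  # → 2.76 grey
```

In a pipeline like the one described, such a score would become one firmographic feature alongside the word-tag frequencies fed to the clustering and neural network models.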

  • Anupam Purwar - An Industrial IoT system for wireless instrumentation: Development, Prototyping and Testing

    45 Mins
    Talk
    Intermediate

    Next-generation machinery, viz. turbines, aircraft and boilers, will rely heavily on smart data acquisition and monitoring to meet performance and reliability requirements. These systems require accurate acquisition of parameters like pressure, temperature and heat flux in real time for structural health monitoring, automation and intelligent control. This calls for sophisticated instrumentation to measure these parameters and transmit them in real time. In the present work, a wireless sensor network (WSN) based on a novel high-temperature thermocouple cum heat flux sensor has been proposed. The architecture of this WSN has evolved keeping in mind robustness, safety and affordability. A WiFi communication protocol based on the IEEE 802.11 b/g/n specification has been utilized to create a secure and low-power WSN. The thermocouple cum heat flux sensor and instrumentation enclosure have been designed using rigorous finite element modelling. The sensor and wireless transmission unit are housed in an enclosure capable of withstanding pressure and temperature up to 100 bar and 2500 K respectively. The sensor signal is conditioned before being passed to the wireless ESP8266 based ESP12E transmitter, which transmits data to a web server. The system uploads the data to a cloud database in real time, providing seamless data availability to decision makers across the globe without any time lag and with ultra-low power consumption. The real-time data is envisaged to be used for structural health monitoring of hot structures by using machine learning (ML) to identify patterns of temperature rise which have historically resulted in damage. Such ML applications can save millions of dollars wasted in the replacement and maintenance of industrial equipment by alerting engineers in real time.

  • Maryam Jahanshahi - Applying Dynamic Embeddings in Natural Language Processing to Analyze Text over Time

    Maryam Jahanshahi, Research Scientist, TapRecruit
    7 months ago · Sold Out!
    45 Mins
    Case Study
    Intermediate

    Many data scientists are familiar with word embedding models such as word2vec, which capture semantic similarity of words in a large corpus. However, word embeddings are limited in their ability to interrogate a corpus alongside other context or over time. Moreover, word embedding models either need significant amounts of data, or tuning through transfer learning of a domain-specific vocabulary that is unique to most commercial applications.

    In this talk, I will introduce exponential family embeddings. Developed by Rudolph and Blei, these methods extend the idea of word embeddings to other types of high-dimensional data. I will demonstrate how they can be used to conduct advanced topic modeling on medium-sized datasets which are specialized enough to require significant modifications of a word2vec model and which contain more general data types (including categorical, count and continuous). I will discuss how my team implemented a dynamic embedding model using TensorFlow and our proprietary corpus of job descriptions. Using both the categorical and the natural language data associated with jobs, we charted the development of different skill sets over the last 3 years. I will focus specifically on how tech and data science skill sets have developed, grown and pollinated other types of jobs over time.
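The primitive underlying both static and dynamic embedding analyses is vector similarity: two words (or skills) are "close" when their embedding vectors point in similar directions. A toy sketch (the 4-dimensional vectors below are invented; real embeddings are learned from a corpus and have hundreds of dimensions):

```python
import math

# Toy 4-dimensional "embeddings"; real models such as word2vec learn
# these vectors from co-occurrence statistics in a large corpus.
vectors = {
    "python":     [0.9, 0.1, 0.8, 0.0],
    "tensorflow": [0.8, 0.2, 0.9, 0.1],
    "nurse":      [0.1, 0.9, 0.0, 0.8],
}

def cosine(u, v):
    """Cosine similarity: the standard closeness measure for embeddings."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# In these toy vectors, "python" sits closer to "tensorflow" than to "nurse".
assert cosine(vectors["python"], vectors["tensorflow"]) > \
       cosine(vectors["python"], vectors["nurse"])
```

A dynamic embedding model fits one such vector per word per time slice, so drift in a skill's neighbours over the years can be read off the same similarity measure.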

  • Saurabh Jha / Rohan Shravan / Usha Rengaraju - Hands on Deep Learning for Computer Vision

    480 Mins
    Workshop
    Intermediate

    Computer vision has many applications, including medical imaging, autonomous vehicles, industrial inspection and augmented reality. The use of deep learning for computer vision can be categorized into multiple tasks for both images and videos – classification, detection, segmentation & generation. Having worked in deep learning with a focus on computer vision, we have come across various challenges and learned best practices over a period of experimenting with cutting-edge ideas. This workshop is for data scientists & computer vision engineers whose focus is deep learning. We will cover state-of-the-art architectures for image classification and segmentation, and practical tips & tricks to train deep neural network models. It will be a hands-on session where every concept is introduced through Python code, and our deep learning frameworks of choice will be PyTorch v1.0 and Keras.

    Given we have only 8 hours, we will cover the most important fundamentals and current techniques, and avoid anything which is obsolete or not used by state-of-the-art algorithms. We will start directly with building the intuition for convolutional neural networks and focus on core architectural problems. We will try to answer some of the hard questions, like how many layers a network must have and how many kernels we should add. We will look at the architectural journey of some of the best papers and discover what each brought to the field of vision AI, making today's best networks possible. We will cover 9 different kinds of convolutions, spanning a spectrum of problems like running DNNs on constrained hardware, super-resolution, image segmentation, etc. The concepts will be good enough for all of us to move to harder problems like segmentation or super-resolution later, but we will focus on object recognition, followed by object detection. We will build our networks step by step, learning how optimization techniques actually improve our networks and exactly when we should introduce them. We hope to leave you with a confidence that will help you read research papers as second nature. Given we have 8 hours and want the sessions to be productive, instead of introducing all the problems and solutions, we will focus on the fundamentals of modern deep neural networks.
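The building block behind all of these architectures is the convolution itself. A minimal pure-Python sketch of a valid-mode 2D convolution applied with a vertical-edge kernel (the toy image is invented for illustration; frameworks such as PyTorch implement this as an optimized tensor operation):

```python
def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the core op of a CNN layer."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

# A Sobel-style vertical-edge kernel on an image with a hard left/right split:
image = [[0, 0, 1, 1]] * 4
sobel_x = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]
print(conv2d(image, sobel_x))  # → [[4, 4], [4, 4]]
```

In a trained CNN the kernel values are not hand-picked like this Sobel filter; they are learned by backpropagation, which is what the architectural discussion above is really about.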

  • Anant Jain - Adversarial Attacks on Neural Networks

    Anant Jain, Co-Founder, Compose Labs, Inc.
    6 months ago · Sold Out!
    20 Mins
    Talk
    Intermediate

    Since 2014, adversarial examples in deep neural networks have come a long way. This talk aims to be a comprehensive introduction to adversarial attacks, covering various threat models (black box/white box) and approaches to creating adversarial examples, and will include demos. The talk will dive deep into the intuition behind why adversarial examples exhibit the properties they do – in particular, transferability across models and training data, as well as high confidence in incorrect labels. Finally, we will go over various approaches to mitigate these attacks (adversarial training, defensive distillation, gradient masking, etc.) and discuss what seems to have worked best over the past year.
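The core of the fast gradient sign method (FGSM), one of the classic attacks a talk like this typically covers, fits in a few lines once the loss gradient with respect to the input is available. A sketch on a toy linear classifier, where that gradient is known in closed form (the weights and input below are invented for illustration; on a real network the gradient would come from autodiff):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sign(t):
    return [1.0 if a > 0 else -1.0 if a < 0 else 0.0 for a in t]

# Toy linear classifier: predict +1 when w . x > 0.
w = [0.5, -1.0, 0.75]
x = [1.0, 0.2, 0.4]            # clean input, scored positive (correct)
assert dot(w, x) > 0

# FGSM: nudge each input dimension by eps along sign(dL/dx). For a loss
# that penalizes a high score, dL/dx is proportional to -w in this model.
eps = 0.5
x_adv = [xi + eps * si for xi, si in zip(x, sign([-wi for wi in w]))]

assert dot(w, x_adv) < 0       # the same classifier now scores it negative
```

The same one-step perturbation, applied to a deep network's input gradient, produces the visually indistinguishable yet confidently misclassified images that made adversarial examples famous.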