Building Deep Learning based Healthcare Application using TensorFlow
Machine learning and deep learning have been rapidly adopted in various spheres of medicine, such as drug discovery, disease diagnosis, genomics, medical imaging and bioinformatics, to translate biomedical data into improved human healthcare. Machine learning/deep learning based healthcare applications help physicians make faster, cheaper and more accurate diagnoses.
We have successfully developed three deep learning based healthcare applications and are currently working on two more healthcare related projects. In this workshop, we will discuss one healthcare application titled "Deep Learning based Craniofacial Distance Measurement for Facial Reconstructive Surgery", which we developed using TensorFlow. Craniofacial distances play an important role in providing information about facial structure. They comprise measurements of the head and face that are taken from an image. They are used in cephalometry and in facial reconstructive procedures such as treatment planning of various malocclusions, craniofacial anomalies, facial contouring, facial rejuvenation and various forehead surgeries, in which reliable and accurate data are critical and cannot be compromised.
Our discussion of this application will cover the precise problem statement, the major steps of the solution (deep learning based face detection and facial landmarking, followed by craniofacial distance measurement), the dataset, experimental analysis, and the challenges we faced and overcame to achieve this success. Subsequently, we will provide hands-on experience implementing this healthcare solution using TensorFlow. Finally, we will briefly discuss possible extensions of our work and the future scope of research in the healthcare sector.
Outline/Structure of the Workshop
- Significance of Deep Learning for Healthcare Solutions (10 mins)
- Discussion of Healthcare Application - 'Deep Learning based Craniofacial Distance Measurement for Facial Reconstructive Surgery' (20 mins)
  - Craniofacial distances and their application in facial reconstructive surgeries (5 mins)
  - Issues in the conventional method of measuring craniofacial distances (3 mins)
  - Problem statement (2 mins)
  - Proposed solution (5 mins)
- Introduction to TensorFlow components and program structure (5 mins)
- Hands-on Healthcare Application (Craniofacial Distance Measurement for Facial Reconstructive Surgery) using TensorFlow (50 mins)
  - Getting started with Google Colaboratory (5 mins)
  - Practice sample programs using TensorFlow in Google Colaboratory (10 mins)
  - Explanation of the dataset (5 mins)
  - Justification for the Python libraries/packages used (5 mins)
  - Implementation of the healthcare application using a pretrained CNN (10 mins; a minimal sketch of the landmark-to-distance step appears after this outline)
  - Implementation of the healthcare application using the CE-CLM model (10 mins)
  - Results and discussion (5 mins)
- Future Research Directions (5 mins)
- Q & A (5 mins)
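To make the hands-on part concrete, here is a minimal sketch of the landmark-to-distance step referenced in the outline, assuming a hypothetical pretrained Keras landmark model; the model file name, landmark indices and pixel-to-millimetre calibration factor are illustrative placeholders, not the workshop's actual artifacts.

```python
# Minimal sketch: predict facial landmarks with a (hypothetical) pretrained
# Keras CNN, then derive a craniofacial distance as the Euclidean distance
# between two predicted landmarks. Model file, landmark indices and the
# pixel-to-millimetre scale are illustrative assumptions.
import numpy as np
import tensorflow as tf

IMG_SIZE = 224        # input size expected by the CNN (assumed)
PX_PER_MM = 3.5       # calibration factor (assumed; e.g. from a reference object)

model = tf.keras.models.load_model("landmark_cnn.h5")  # hypothetical model file

def craniofacial_distance(image_path, idx_a, idx_b):
    """Distance (mm) between landmarks idx_a and idx_b in a face image."""
    img = tf.keras.preprocessing.image.load_img(image_path,
                                                target_size=(IMG_SIZE, IMG_SIZE))
    x = tf.keras.preprocessing.image.img_to_array(img) / 255.0
    # Model is assumed to output a flat vector (x1, y1, ..., x68, y68) in pixels.
    landmarks = model.predict(x[np.newaxis, ...])[0].reshape(-1, 2)
    dist_px = np.linalg.norm(landmarks[idx_a] - landmarks[idx_b])
    return dist_px / PX_PER_MM

# e.g. outer-eye-corner distance, assuming indices 36 and 45 in a 68-point scheme
# print(craniofacial_distance("face.jpg", 36, 45))
```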
Learning Outcome
After attending this workshop, participants will have an overview of TensorFlow and of how to build machine learning/deep learning based applications using it.
Target Audience
Data Scientists, Machine Learning/Deep Learning Practitioners, Python Programmers, Doctors, Researchers, and Students & Faculty Members from fields such as engineering, technology and medicine.
Prerequisites for Attendees
- Familiarity with the fundamentals of machine learning and deep learning.
- Basic knowledge of Python programming.
- Download the files from https://drive.google.com/drive/folders/17fvlAkyoJ5cLmd5XJk-dz6A5Eor6zhpr?usp=sharing, OR from https://github.com/drmayurimehta/Tensorflow and https://drive.google.com/file/d/145g_UsAyk1n8-WXQg9OOaS69q0dbCqub/view?usp=sharing. Upload all of these files to a folder named 'ODSC2019' in your Google Drive (the snippet after this list shows how to verify the setup from Colab).
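Once in Colaboratory, mounting Google Drive makes the uploaded folder visible to the notebook; `google.colab.drive.mount` is the standard way to do this, and the listing below simply verifies the setup.

```python
# In Google Colaboratory: mount your Drive so the notebook can read the
# uploaded files. The folder name 'ODSC2019' matches the prerequisite above.
from google.colab import drive

drive.mount('/content/drive')                          # prompts for authorization

import os
print(os.listdir('/content/drive/My Drive/ODSC2019'))  # verify the files are visible
```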
Links
- Conducted several hands-on sessions on TensorFlow in Faculty Development Programmes (FDPs)/workshops at various prestigious institutes, including:
- Symbiosis Institute of Technology, Pune
- Birla Vishwakarma Mahavidyalaya, Anand
- G. H. Patel Engineering College, Vidyanagar
- A. D. Patel College of Engineering, New V. V. Nagar
- C. G. Patel Institute of Technology, Bardoli
- Sarvajanik College of Engg. & Tech., Surat
- Will conduct tutorials on similar topics at upcoming IEEE conferences and Short Term Training Programmes (STTPs).
People who liked this proposal also liked:
- Viral B. Shah - Models as Code: Differentiable Programming with Julia
45 Mins / Keynote / Intermediate
Since we originally proposed the need for a first-class language, compiler and ecosystem for machine learning (ML), a view that is increasingly shared by many, there have been plenty of interesting developments in the field. Not only have the tradeoffs in existing systems, such as TensorFlow and PyTorch, not been resolved; they are clearer than ever now that both frameworks contain distinct "static graph" and "eager execution" interfaces. Meanwhile, the idea of ML models fundamentally being differentiable algorithms, often called differentiable programming, has caught on.
Where current frameworks fall short, several exciting new projects have sprung up that dispense with graphs entirely, to bring differentiable programming to the mainstream. Myia, by the Theano team, differentiates and compiles a subset of Python to high-performance GPU code. Swift for TensorFlow extends Swift so that compatible functions can be compiled to TensorFlow graphs. And finally, the Flux ecosystem is extending Julia’s compiler with a number of ML-focused tools, including first-class gradients, just-in-time CUDA kernel compilation, automatic batching and support for new hardware such as TPUs.
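The talk's examples are written in Julia, but the core idea of differentiable programming, taking gradients of ordinary code with control flow included, can be illustrated in a few lines with any eager framework; here is a minimal sketch using TensorFlow's eager mode.

```python
# Differentiable programming in miniature: gradients of ordinary code.
# (TensorFlow eager mode is used purely as an illustration; the talk's
# examples are written in Julia with Flux.)
import tensorflow as tf

def model(x, w, b):
    # Any plain function of tensors is differentiable, branches included.
    return tf.sin(w * x) + b if x > 0 else w * x + b

x = tf.constant(1.3)
w = tf.Variable(0.7)
b = tf.Variable(-0.2)

with tf.GradientTape() as tape:
    y = model(x, w, b)

dw, db = tape.gradient(y, [w, b])   # gradients w.r.t. the parameters
print(float(dw), float(db))
```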
This talk will demonstrate how Julia is increasingly becoming a natural language for machine learning, the kind of libraries and applications the Julia community is building, the contributions from India (there are many!), and our plans going forward.
- Dr. C.S.Jyothirmayee / Usha Rengaraju / Vijayalakshmi Mahadevan - Deep learning powered Genomic Research
Dr. C.S.Jyothirmayee (Sr. Scientist, Novozymes South Asia Pvt Ltd) / Usha Rengaraju (Principal Data Scientist, Mysuru Consulting Group) / Vijayalakshmi Mahadevan (Faculty Scientist, Institute of Bioinformatics and Applied Biotechnology (IBAB))
90 Mins / Workshop / Advanced
Disease occurs when there is a slip in the finely orchestrated dance between physiology, environment and genes. Treatment with chemicals (natural, synthetic or a combination) cured some diseases, but others persisted and were propagated across generations. The molecular basis of disease therefore became a prime focus of study for understanding and analysing root causes. Cancer showed that the origin, detection, prognosis, treatment and cure of disease is far from an uncomplicated process; diseases had to be treated on a case-by-case basis (one size does not fit all).
The advent of next-generation sequencing, high-throughput analysis, enhanced computing power and neural networks has opened new ways to address this conundrum of complicated genetic elements (the structure and function of the various genes in our systems). This requires extraction of genomic material, automated sequencing, and analysis to map the strings of As, Ts, Gs and Cs that make up a genomic dataset. These datasets are too large for traditional applied statistical techniques, and the important signals are often vanishingly small amid blaring technical noise, demanding far more sophisticated analysis techniques. Artificial intelligence and deep learning give us the power to draw clinically useful information from the genetic datasets obtained by sequencing.
The precision of these analyses has become vital for disease detection and for assessing predisposition, and it empowers medical authorities to make fair, situation-aware decisions about patient treatment strategies. This kind of genomic profiling, prediction and disease management is useful for tailoring FDA-approved treatment strategies to these molecular disease drivers and to the patient's molecular makeup.
The present scenario encourages the design, development and testing of medicines based on existing genetic insights and models. Deep learning models are helping to analyse and interpret tiny genetic variations (such as SNPs, single nucleotide polymorphisms) and thereby unravel crucial cellular processes such as metabolism and DNA wear and tear. These models can also identify disease risk signatures, such as those for cancer, from various body fluids, and they have immense potential to revolutionize the healthcare ecosystem. Clinical data collection today is haphazard rather than streamlined; making that data uniformly fetchable and combinable with genetic information would greatly increase its value for interpretation and for decisive patient treatment.
There is a huge inflow of medical data from emerging wearable technologies. Integrating it with other health data, together with the ability to quickly carry out complex analyses on rich genomic databases in the cloud, would revitalize our disease-fighting capability. A final, still emerging area of application is direct-to-consumer genomics (witness the success of 23andMe).
This roadmap promises an end-to-end system for confronting disease in all its forms. Medical research and its applications, such as gene therapies, gene-editing technologies like CRISPR, molecular diagnostics and precision medicine, could be revolutionized by tailoring high-throughput computing methods to enhanced genomic datasets.
- Johnu George / Ramdoot Kumar P - A Scalable Hyperparameter Optimization framework for ML workloads
Johnu George (Technical Lead, Cisco Systems) / Ramdoot Kumar P (Technical Lead, Cisco Systems)
20 Mins / Demonstration / Intermediate
In machine learning, hyperparameters are parameters that govern the training process itself. For example, the learning rate, the number of hidden layers and the number of nodes per layer are typical hyperparameters for neural networks. Hyperparameter tuning is the process of searching for the best hyperparameters with which to initialize the learning algorithm, thereby improving training performance.
We present Katib, a scalable and general hyperparameter tuning framework based on Kubernetes which is ML framework agnostic (TensorFlow, PyTorch, MXNet, XGBoost, etc.). You will learn about Katib in Kubeflow, an open source ML toolkit for Kubernetes, as we demonstrate the advantages of hyperparameter optimization by running a sample classification problem. In addition, as we dive into the implementation details, you will learn how to contribute as we expand this platform to include AutoML tools.
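Katib itself is configured through Kubernetes Experiment specs, but the core loop it automates can be sketched in a few lines; below is a minimal random-search illustration over two of the hyperparameters mentioned above, with a stand-in objective in place of actual model training.

```python
# Minimal random-search sketch over two hyperparameters. In Katib the search
# space and objective live in a Kubernetes Experiment spec; the stand-in
# objective below merely mimics a validation score.
import math
import random

def validation_score(learning_rate, num_layers):
    # Stand-in for "train the model, return validation accuracy".
    return -abs(math.log10(learning_rate) + 2.5) - 0.1 * abs(num_layers - 4)

search_space = {
    "learning_rate": lambda: 10 ** random.uniform(-5, -1),  # log-uniform draw
    "num_layers":    lambda: random.randint(1, 8),
}

best_params, best_score = None, float("-inf")
for _ in range(50):                                   # trial budget
    params = {name: draw() for name, draw in search_space.items()}
    score = validation_score(**params)
    if score > best_score:
        best_params, best_score = params, score

print(best_params, best_score)
```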
- Favio Vázquez - Complete Data Science Workflows with Open Source Tools
90 Mins / Tutorial / Beginner
Cleaning, preparing, transforming, exploring and modeling data is what we hear about all the time in data science, and these steps may be the most important ones. But they are not the whole story: in this talk you will learn how the combination of Apache Spark, Optimus, the Python ecosystem and data operations can form a complete framework for data science, one that allows you and your company to go beyond common sense and intuition to solve complex business problems.
- Anupam Purwar - Prediction of Wilful Default using Machine Learning
45 Mins / Case Study / Intermediate
Banks and financial institutions in India have increasingly faced defaults by corporates over the last few years. NBFC stocks have suffered huge losses in recent times, triggering a contagion that spilled over to other financial stocks and adversely affected benchmark indices, resulting in short-term bearishness. This makes it imperative to investigate ways to prevent, rather than cure, such situations. The banks, however, face a twin challenge: identifying probable wilful defaulters from the rest, and moral hazard among bank employees, who are many a time found to be acting at the behest of promoters of defaulting firms. The first challenge is aggravated by the fact that due diligence of firms before the extension of a loan is a time-consuming process, and the second hints at the need for automated safeguards to reduce malpractices originating in human behaviour. To address these challenges, automation of the loan sanctioning process is a possible solution.
Hence, we identified important firmographic variables, viz. financial ratios and their historic patterns, by looking at the firms listed as the "dirty dozen" by the Reserve Bank of India. Next, we used k-means clustering to segment these firms and label them into categories, viz. normal, distressed defaulter and wilful defaulter. Besides, we utilized text and sentiment analysis to analyze the annual reports of all BSE and NSE listed firms over the last 10 years. From this, we identified word tags which resonate well with the occurrence of default and are indicators of the financial performance of these firms. A rigorous analysis of these word tags (unigrams, bi-grams and co-located words) over a period of 10 years for more than 100 firms indicates a relation between the frequency of word tags and firm default. Lift estimation of firmographic financial ratios, namely the Altman Z-score, and the frequency of word tags for the first time uncovers the importance of text analysis in predicting the financial performance of firms and their default.
Our investigation also reveals the possibility of using neural networks as a predictor of firm default. Interestingly, the neural network developed by us utilizes the power of open source machine learning libraries and opens up the possibility of banks deploying such a model with a small one-time investment. In short, our work demonstrates the ability of machine learning to address challenges related to the prevention of wilful default. We envisage that the implementation of neural network based prediction models and text analysis of firm-specific financial reports could help the financial industry save millions in the recovery and restructuring of loans.
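As an illustration of the segmentation step described above, here is a minimal k-means sketch; the financial-ratio features and synthetic data are stand-ins, not the case study's actual dataset.

```python
# Minimal sketch of the segmentation step: k-means over firmographic ratios.
# The feature columns and synthetic data are illustrative stand-ins.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Columns: e.g. Altman Z-score, debt/equity, interest coverage (assumed features)
X = np.vstack([
    rng.normal([3.0, 0.8, 6.0], 0.5, size=(40, 3)),   # healthy-looking firms
    rng.normal([1.5, 2.5, 1.5], 0.5, size=(40, 3)),   # distressed-looking firms
    rng.normal([0.5, 4.0, 0.3], 0.5, size=(40, 3)),   # wilful-default-looking firms
])

km = KMeans(n_clusters=3, n_init=10, random_state=0)
clusters = km.fit_predict(StandardScaler().fit_transform(X))

# Cluster ids are arbitrary; in the case study they would be mapped to labels
# such as 'normal', 'distressed defaulter' and 'wilful defaulter' on inspection.
print(np.bincount(clusters))
```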
- Dr. Mayuri Mehta - Demonstration of Deep Learning based Healthcare Applications
Dr. Mayuri Mehta (Professor & PG In-Charge, Department of Computer Engineering, Sarvajanik College of Engineering and Technology)
45 Mins / Demonstration / Intermediate
Recent advancements in AI are proving beneficial in the development of applications in various spheres of the healthcare sector, such as microbiological analysis, drug discovery, disease diagnosis, genomics, medical imaging and bioinformatics, translating large-scale data into improved human healthcare. Automation in healthcare using machine learning/deep learning helps physicians make faster, cheaper and more accurate diagnoses.
Due to increasing availability of electronic healthcare data (structured as well as unstructured data) and rapid progress of analytics techniques, a lot of research is being carried out in this area. Popular AI techniques include machine learning/deep learning for structured data and natural language processing for unstructured data. Guided by relevant clinical questions, powerful deep learning techniques can unlock clinically relevant information hidden in the massive amount of data, which in turn can assist clinical decision making.
We have successfully developed three deep learning based healthcare applications using TensorFlow and are currently working on three more healthcare related projects. In this demonstration session, we shall first briefly discuss the significance of deep learning for healthcare solutions. Next, we will demonstrate two deep learning based healthcare applications developed by us. The discussion of each application will include the precise problem statement, the proposed solution, the data collected and used, experimental analysis, and the challenges encountered and overcome to achieve this success. Finally, we will briefly discuss the other applications on which we are currently working and the future scope of research in this area.
- Anupam Purwar - An Industrial IoT system for wireless instrumentation: Development, Prototyping and Testing
45 Mins / Talk / Intermediate
Next-generation machinery viz. turbines, aircraft and boilers will rely heavily on smart data acquisition and monitoring to meet their performance and reliability requirements. These systems require the accurate acquisition of parameters such as pressure, temperature and heat flux in real time for structural health monitoring, automation and intelligent control, which calls for sophisticated instrumentation to measure these parameters and transmit them in real time. In the present work, a wireless sensor network (WSN) based on a novel high-temperature thermocouple cum heat flux sensor is proposed. The architecture of this WSN has been evolved keeping in mind robustness, safety and affordability. A WiFi communication protocol based on the IEEE 802.11 b/g/n specification is used to create a secure, low-power WSN. The thermocouple cum heat flux sensor and the instrumentation enclosure have been designed using rigorous finite element modelling. The sensor and wireless transmission unit are housed in an enclosure capable of withstanding pressures up to 100 bar and temperatures up to 2500 K. The sensor signal is conditioned before being passed to the ESP8266 based ESP12E wireless transmitter, which sends the data to a web server; the system uploads the data to a cloud database in real time, providing seamless data availability to decision makers across the globe without any time lag and with ultra-low power consumption. The real-time data is envisaged to be used for structural health monitoring of hot structures, with machine learning (ML) identifying patterns of temperature rise that have historically resulted in damage. Such ML applications can save millions of dollars otherwise wasted in the replacement and maintenance of industrial equipment by alerting engineers in real time.
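The real transmitter runs on ESP8266 firmware, but the telemetry step, posting a conditioned sensor reading to a web server, can be sketched at the protocol level; the endpoint URL and payload fields below are hypothetical.

```python
# Protocol-level sketch of the telemetry step: post one sensor reading to a
# web server. The endpoint URL and payload fields are hypothetical; the real
# system does this from ESP8266 firmware over WiFi (IEEE 802.11 b/g/n).
import json
import time
import urllib.request

reading = {
    "sensor_id": "tc_hf_01",        # hypothetical thermocouple/heat-flux sensor id
    "temperature_k": 1843.2,
    "heat_flux_w_m2": 5.6e5,
    "timestamp": time.time(),
}

req = urllib.request.Request(
    "https://example.com/api/readings",              # hypothetical endpoint
    data=json.dumps(reading).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:            # server stores it in the cloud DB
    print(resp.status)
```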
- Maryam Jahanshahi - Applying Dynamic Embeddings in Natural Language Processing to Analyze Text over Time
45 Mins / Case Study / Intermediate
Many data scientists are familiar with word embedding models such as word2vec, which capture the semantic similarity of words in a large corpus. However, word embeddings are limited in their ability to interrogate a corpus alongside other context or over time. Moreover, word embedding models either need significant amounts of data or require tuning through transfer learning to the domain-specific vocabulary that is unique to most commercial applications.
In this talk, I will introduce exponential family embeddings. Developed by Rudolph and Blei, these methods extend the idea of word embeddings to other types of high-dimensional data. I will demonstrate how they can be used to conduct advanced topic modeling on medium-sized datasets that are specialized enough to require significant modifications of a word2vec model and that contain more general data types (including categorical, count and continuous data). I will discuss how my team implemented a dynamic embedding model using TensorFlow and our proprietary corpus of job descriptions. Using both the categorical and the natural language data associated with jobs, we charted the development of different skill sets over the last 3 years. I will focus the description of results on how tech and data science skill sets have developed, grown and pollinated other types of jobs over time.
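Dynamic embeddings share parameters across time slices, but the flavour of a "text over time" analysis can be approximated by training an independent word2vec model per slice; below is a minimal gensim sketch on a toy corpus (gensim >= 4.0 assumed).

```python
# Minimal sketch of a "text over time" analysis: train one word2vec model per
# time slice and compare a term's neighbours across slices. This only
# approximates the idea; the talk's dynamic embeddings share parameters across
# time rather than fitting independent models. Toy corpus; gensim >= 4.0.
from gensim.models import Word2Vec

slices = {
    2017: [["data", "scientist", "statistics", "modeling"],
           ["python", "data", "analysis"]],
    2019: [["data", "scientist", "deep", "learning", "tensorflow"],
           ["python", "deep", "learning"]],
}

for year, corpus in slices.items():
    model = Word2Vec(sentences=corpus, vector_size=16, window=2,
                     min_count=1, epochs=50, seed=0)
    print(year, model.wv.most_similar("data", topn=3))
```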
- Saurabh Jha / Rohan Shravan / Usha Rengaraju - Hands on Deep Learning for Computer Vision
Saurabh Jha (Deep Learning Architect, Dell) / Rohan Shravan / Usha Rengaraju (Principal Data Scientist, Mysuru Consulting Group)
480 Mins / Workshop / Intermediate
Computer vision has many applications, including medical imaging, autonomous vehicles, industrial inspection and augmented reality. The use of deep learning for computer vision can be grouped into multiple categories, for both images and videos: classification, detection, segmentation and generation.
Having worked in deep learning with a focus on computer vision, we have come across various challenges and learned best practices over a period of experimenting with cutting-edge ideas. This workshop is for data scientists and computer vision engineers whose focus is deep learning. We will cover state-of-the-art architectures for image classification and segmentation, and practical tips and tricks for training deep neural network models. It will be a hands-on session where every concept is introduced through Python code; our deep learning frameworks of choice are PyTorch v1.0 and Keras.
Given we have only 8 hours, we will cover the most important fundamentals and current techniques, and avoid anything obsolete or not used by state-of-the-art algorithms. We will start directly with building the intuition for convolutional neural networks and focus on core architectural problems. We will try to answer some of the hard questions, such as how many layers a network must have and how many kernels we should add. We will trace the architectural journey of some of the best papers and discover what each brought to the field of vision AI, making today's best networks possible. We will cover 9 different kinds of convolutions, spanning a spectrum of problems such as running DNNs on constrained hardware, super-resolution and image segmentation. These concepts would be good enough for all of us to move to harder problems like segmentation or super-resolution later, but we will focus on object recognition, followed by object detection. We will build our networks step by step, learning how optimization techniques actually improve our networks and exactly when we should introduce them. We hope to leave you with a confidence that helps you read research papers as second nature. Given we have 8 hours and want the sessions to be productive, rather than introducing all the problems and solutions we will focus on the fundamentals of modern deep neural networks.
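As a taste of the kind of model this workshop builds up, here is a minimal Keras CNN classifier; the layer and kernel counts are arbitrary starting points, exactly the sort of choice the session interrogates.

```python
# Minimal Keras CNN for image classification; the kernel counts and depth are
# arbitrary starting points - precisely the kind of choice ("how many layers,
# how many kernels?") the workshop examines.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),   # e.g. 10 classes (CIFAR-10-sized)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```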
- Anant Jain - Adversarial Attacks on Neural Networks
20 Mins / Talk / Intermediate
Since 2014, adversarial examples in deep neural networks have come a long way. This talk aims to be a comprehensive introduction to adversarial attacks, covering various threat models (black box/white box) and approaches to creating adversarial examples, and it will include demos. The talk will dive deep into the intuition behind why adversarial examples exhibit the properties they do, in particular transferability across models and training data, as well as high confidence in incorrect labels. Finally, we will go over various approaches to mitigating these attacks (adversarial training, defensive distillation, gradient masking, etc.) and discuss what seems to have worked best over the past year.
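One classic white-box approach in this area is the fast gradient sign method (FGSM); here is a minimal sketch of its mechanics, using an untrained stand-in model and a random input.

```python
# Minimal FGSM sketch (white-box attack): perturb the input in the direction
# of the sign of the loss gradient. The untrained model and random "image"
# are stand-ins purely to show the mechanics.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

image = tf.random.uniform((1, 28, 28))       # stand-in input in [0, 1]
label = tf.constant([3])                     # stand-in true label
epsilon = 0.1                                # perturbation budget

with tf.GradientTape() as tape:
    tape.watch(image)                        # image is a constant, so watch it
    loss = loss_fn(label, model(image))

grad = tape.gradient(loss, image)
adversarial = tf.clip_by_value(image + epsilon * tf.sign(grad), 0.0, 1.0)

print("clean pred:", int(tf.argmax(model(image), axis=1)[0]))
print("adv   pred:", int(tf.argmax(model(adversarial), axis=1)[0]))
```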