Deep Learning Powered Genomic Research
Disease arises when there is a slip in the finely orchestrated dance between physiology, environment, and genes. Treatment with chemical agents (natural, synthetic, or combinations of both) cured some diseases, but others persisted and were propagated across generations. The molecular basis of disease therefore became a prime focus of study for understanding and analyzing root causes. Cancer in particular showed that the origin, detection, prognosis, treatment, and cure of a disease is no simple process: diseases must be treated case by case, because no single approach fits all patients.
With the advent of next-generation sequencing, high-throughput analysis, enhanced computing power, and new ambitions for neural networks, we can now address this conundrum of complicated genetic elements (the structure and function of the many genes in our systems). The workflow requires extracting genomic material, sequencing it on automated systems, and analyzing the resulting strings of As, Ts, Gs, and Cs, which yields genomic datasets. These datasets are too large for traditional applied statistical techniques, and the important signals in them are often vanishingly small amid blaring technical noise, so far more sophisticated analysis techniques are required. Artificial intelligence and deep learning give us the power to draw clinically useful information from the genetic datasets obtained by sequencing.
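To give a flavour of the preprocessing such pipelines involve, a string of As, Ts, Gs, and Cs is typically one-hot encoded before being fed to a neural network. The sketch below is our own illustration; the A/C/G/T column order and the helper name are arbitrary choices, not part of any particular toolchain.

```python
# Minimal sketch of genomic preprocessing: one-hot encode a DNA string
# so it can be fed to a neural network. The A/C/G/T column order and
# the helper name are our own illustrative choices.
BASES = "ACGT"
BASE_INDEX = {b: i for i, b in enumerate(BASES)}

def one_hot(seq):
    """Map a DNA string to a list of 4-element indicator rows."""
    rows = []
    for base in seq.upper():
        row = [0, 0, 0, 0]
        row[BASE_INDEX[base]] = 1
        rows.append(row)
    return rows

# "AC" -> [[1, 0, 0, 0], [0, 1, 0, 0]]
```

Stacking these rows gives a length-by-4 matrix, the standard input shape for convolutional sequence models.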
The precision of these analyses has become vital: it is the way forward for disease detection and predisposition screening, and it empowers medical authorities to make fair, situation-aware decisions about patient treatment strategies. Genomic profiling, prediction, and disease management of this kind help tailor FDA-approved treatment strategies to the molecular drivers of a disease and the patient's molecular makeup.
The present scenario encourages designing, developing, and testing medicines based on existing genetic insights and models. Deep learning models help analyze and interpret tiny genetic variations (such as SNPs, single-nucleotide polymorphisms) that shape crucial cellular processes like metabolism and DNA wear and tear. These models can also identify disease risk signatures, such as cancer markers, in various body fluids, and they have immense potential to revolutionize the healthcare ecosystem. Clinical data collection, however, is not streamlined and is done in a haphazard manner; making these data uniform, fetchable, and combinable with genetic information would greatly increase their value for interpretation, treatment decisions, and patient outcomes.
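At its simplest, a single-nucleotide variation is a position where a sample sequence disagrees with a reference. The toy sketch below illustrates only that core idea with invented sequences; real variant calling operates on aligned sequencing reads and handles quality scores, insertions, and deletions.

```python
# Toy sketch: locate single-nucleotide differences between a sample
# sequence and a reference of equal length. Sequences are invented;
# real variant calling operates on aligned sequencing reads.
def find_snps(reference, sample):
    """Return (position, ref_base, alt_base) for every mismatch."""
    return [(i, r, s)
            for i, (r, s) in enumerate(zip(reference, sample))
            if r != s]

# find_snps("ACGTACGT", "ACGAACGT") -> [(3, 'T', 'A')]
```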
There is a huge inflow of medical data from emerging wearable technologies. Integrating it with other health data, together with the ability to quickly carry out complex analyses on rich genomic databases in the cloud, would revitalize humanity's disease-fighting capability. A final, still-emerging area of application is direct-to-consumer genomics (witness the success of 23andMe).
This road map promises an end-to-end system for confronting disease in all its forms. Medical research and its applications, including gene therapies, gene-editing technologies such as CRISPR, molecular diagnostics, and precision medicine, could be revolutionized by tailoring high-throughput computing methods to these enriched genomic datasets.
Outline/Structure of the Workshop
1. Genetic structure and basic building blocks
2. DNA mutation detection on a small DNA dataset
3. Cancer classification model
4. How to interpret DNA information and use it for disease prognosis
Learning Outcome
- Gain insight into human genomics and healthcare
- Develop an intuitive understanding of sequence models
- Learn about an exciting emerging area of research
Target Audience
Anyone interested in healthcare and emerging trends in human genetics: data scientists, data analysts, machine learning engineers, life sciences/genomics researchers, and deep learning engineers.
Prerequisites for Attendees
We start right from the basics, so there are no specific prerequisites.
However, basic Python skills and some knowledge of ANNs and CNNs would help.
Submitted 5 days ago
People who liked this proposal also liked:
Viral B. Shah - Growing a compiler - Getting to ML from the general-purpose Julia compiler
Viral B. Shah, Co-inventor of Julia, Julia Computing Inc.
Since we originally proposed the need for a first-class language, compiler and ecosystem for machine learning (ML) - a view that is increasingly shared by many, there have been plenty of interesting developments in the field. Not only have the tradeoffs in existing systems, such as TensorFlow and PyTorch, not been resolved, but they are clearer than ever now that both frameworks contain distinct "static graph" and "eager execution" interfaces. Meanwhile, the idea of ML models fundamentally being differentiable algorithms – often called differentiable programming – has caught on.
Where current frameworks fall short, several exciting new projects have sprung up that dispense with graphs entirely, to bring differentiable programming to the mainstream. Myia, by the Theano team, differentiates and compiles a subset of Python to high-performance GPU code. Swift for TensorFlow extends Swift so that compatible functions can be compiled to TensorFlow graphs. And finally, the Flux ecosystem is extending Julia’s compiler with a number of ML-focused tools, including first-class gradients, just-in-time CUDA kernel compilation, automatic batching and support for new hardware such as TPUs.
This talk will demonstrate how Julia is increasingly becoming a natural language for machine learning, the kind of libraries and applications the Julia community is building, the contributions from India (there are many!), and our plans going forward.
Favio Vázquez - Complete Data Science Workflows with Open Source Tools
Favio Vázquez, Sr. Data Scientist, Raken Data Group
Cleaning, preparing, transforming, exploring, and modeling data is what we hear about all the time in data science, and these steps may be the most important ones. But that is not all there is to data science. In this talk you will learn how the combination of Apache Spark, Optimus, the Python ecosystem, and data operations can form a complete framework for data science that will allow you and your company to go further, beyond common sense and intuition, to solve complex business problems.
Saurabh Jha / Usha Rengaraju - Hands-on Deep Learning for Computer Vision – Techniques for Image Segmentation
Saurabh Jha, Deep Learning Architect, Dell; Usha Rengaraju, Data Scientist, Mysuru Consulting Group
Computer vision has many applications, including medical imaging, autonomous vehicles, industrial inspection, and augmented reality. The use of deep learning for computer vision can be grouped into multiple categories for both images and videos: classification, detection, segmentation, and generation.
Having worked in deep learning with a focus on computer vision, we have come across various challenges and learned best practices over years of experimenting with cutting-edge ideas. This workshop is for data scientists and computer vision engineers whose focus is deep learning. We will cover state-of-the-art architectures for image segmentation and practical tips and tricks for training deep neural network models. It will be a hands-on session where every concept is introduced through Python code, and our deep learning framework of choice will be PyTorch v1.0.
The workshop takes a structured approach. First it covers basic techniques in image processing and Python for handling images and building PyTorch data loaders. Then we show how image segmentation was done in the pre-CNN era and cover clustering techniques for segmentation. We start with the basics of neural networks, introduce convolutional neural networks, and cover an advanced architecture, ResNet. We introduce the fully convolutional network paper and its impact on semantic segmentation, then cover the latest semantic segmentation architectures with code, along with the basics of scene text understanding in PyTorch and how to run carefully designed experiments using callbacks and hooks. We also introduce discriminative learning rates and mixed precision for training deep neural network models. The idea is to bridge the gap between theory and practice: to teach how to run practical experiments and tune deep learning based systems, covering tricks introduced in various research papers, and to discuss in depth the interaction between batch norm, weight decay, and the learning rate.
Pankaj Kumar / Abinash Panda / Usha Rengaraju - Quantitative Finance: Global Macro Trading Strategy Using Probabilistic Graphical Models
Pankaj Kumar, Quantitative Research Associate, State Street Global Advisors; Abinash Panda, CEO, Prodios Labs; Usha Rengaraju, Data Scientist, Mysuru Consulting Group
Crude oil plays an important role in macroeconomic stability and heavily influences the performance of global financial markets. Unexpected fluctuations in the real price of crude oil are detrimental to the welfare of both oil-importing and oil-exporting economies. Global macro hedge funds view the forecast price of oil as one of the key variables in generating macroeconomic projections, and it also plays an important role for policy makers in predicting recessions.
Probabilistic graphical models can help improve the accuracy of existing quantitative models for crude oil price prediction, as they take into account many different macroeconomic and geopolitical variables.
Hidden Markov models are used to detect underlying regimes in the time series by discretising the continuous data. In this workshop we use the Baum-Welch algorithm to learn the HMMs, and the Viterbi algorithm to find the sequence of hidden states (the regimes) given the observed states (the monthly differences) of the time series.
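As a sketch of the decoding step, the Viterbi algorithm for a tiny two-state regime model might look like the following. The state names, observation symbols, and all probabilities here are our own illustrative inventions, not fitted values from the workshop.

```python
# Toy Viterbi decoder for a 2-state HMM over discretised observations.
# States, symbols, and probabilities are illustrative, not fitted.
states = ["low_vol", "high_vol"]
start = {"low_vol": 0.6, "high_vol": 0.4}
trans = {"low_vol": {"low_vol": 0.8, "high_vol": 0.2},
         "high_vol": {"low_vol": 0.3, "high_vol": 0.7}}
emit = {"low_vol": {"up": 0.5, "down": 0.5},
        "high_vol": {"up": 0.2, "down": 0.8}}

def viterbi(obs):
    """Return the most likely hidden-state sequence for obs."""
    V = [{s: start[s] * emit[s][obs[0]] for s in states}]  # initial column
    back = []                                              # backpointers
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            # Best predecessor for state s at this step.
            prev, p = max(((r, V[-1][r] * trans[r][s]) for r in states),
                          key=lambda x: x[1])
            col[s] = p * emit[s][o]
            ptr[s] = prev
        V.append(col)
        back.append(ptr)
    # Trace the best path backwards from the most likely final state.
    path = [max(V[-1], key=V[-1].get)]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

In practice one works in log space to avoid underflow on long series, and a library such as hmmlearn or pomegranate would supply both Baum-Welch fitting and decoding.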
Belief networks are used to analyse the probability of a regime in crude oil given, as evidence, a set of regimes in the macroeconomic factors. A greedy hill-climbing algorithm is used to learn the belief network, and its parameters are then learned by Bayesian estimation with a K2 prior. Inference is then performed on the belief network to obtain a forecast of the crude oil markets, and the forecast is tested on real data.
Shrutika Poyrekar / Kiran Karkera / Usha Rengaraju - Introduction to Bayesian Networks
Shrutika Poyrekar, Data Scientist, Envestnet | Yodlee; Kiran Karkera, Data Scientist, Dex.sg; Usha Rengaraju, Data Scientist, Mysuru Consulting Group
Most machine learning models assume independent and identically distributed (i.i.d.) data, but graphical models can capture almost arbitrarily rich dependency structures between variables, encoding conditional independence with graphs. A Bayesian network, one type of graphical model, describes a probability distribution over all variables by putting edges between variable nodes, where each edge represents a conditional probability factor in the factorized distribution. Bayesian networks thus provide a compact representation for dealing with uncertainty using an underlying graphical structure and probability theory. These models have applications in medical diagnosis, biomonitoring, image processing, turbo codes, information retrieval, document classification, gene regulatory networks, and many other areas. They are interpretable, since they can capture causal relationships between features; they work efficiently with small data and can deal with missing data, which gives them an edge over conventional machine learning and deep learning models.
In this session, we will discuss the concepts of conditional independence, d-separation, the Hammersley-Clifford theorem, Bayes' theorem, expectation maximization, and variable elimination. There will be a code walkthrough of a simple case study.
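As a minimal taste of the kind of computation involved, the classic rain/sprinkler/wet-grass network can be queried by brute-force enumeration over the factorized joint. All numbers below are invented for illustration, and the workshop's actual case study may differ; variable elimination does the same computation far more efficiently on larger networks.

```python
from itertools import product

# Hypothetical three-node network, Rain -> WetGrass <- Sprinkler.
# The joint distribution factorises as P(R) * P(S) * P(W | R, S);
# every number below is invented purely for illustration.
P_R = {True: 0.2, False: 0.8}                  # P(Rain)
P_S = {True: 0.1, False: 0.9}                  # P(Sprinkler)
P_W_GIVEN = {(True, True): 0.99, (True, False): 0.9,
             (False, True): 0.8, (False, False): 0.0}  # P(Wet=True | R, S)

def joint(r, s, w):
    """Evaluate one cell of the factorised joint distribution."""
    pw = P_W_GIVEN[(r, s)] if w else 1.0 - P_W_GIVEN[(r, s)]
    return P_R[r] * P_S[s] * pw

def posterior_rain_given_wet():
    """P(Rain=True | Wet=True) by enumeration over Sprinkler."""
    num = sum(joint(True, s, True) for s in (True, False))
    den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
    return num / den

# Observing wet grass raises P(Rain) well above its 0.2 prior.
```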
Shalini Sinha / Badri Narayanan Gopalakrishnan, PhD / Usha Rengaraju - Lifting Up: Deep Learning for Effective and Efficient Implementation of Anti-Hunger and Anti-Poverty Programs (AI for Social Good)
Shalini Sinha, Director - Data Science, Numerify; Badri Narayanan Gopalakrishnan, PhD, Founder and Director, Infinite Sum Modelling, Seattle, USA; Usha Rengaraju, Data Scientist, Mysuru Consulting Group
Ending poverty and achieving zero hunger are the top two goals the United Nations aims to reach by 2030 under its sustainable development program. Hunger and poverty are byproducts of multiple factors, and fighting them requires a multi-fold effort from all stakeholders. Artificial intelligence and machine learning have transformed the way we live, work, and interact; however, the economics of business has limited their application to a few segments of society. A much more conscious effort is needed to bring the power of AI to those who actually need it most: people below the poverty line.

Here we present our thoughts on how deep learning and big data analytics can be combined to enable effective implementation of anti-poverty programs. Advancements in deep learning and micro-diagnostics, combined with effective technology policy, are the right recipe for the progressive growth of a nation. Deep learning can help identify poverty zones across the globe from night-time imagery, where the level of light correlates with economic growth. Once areas of lower economic growth are identified, geographic and demographic data can be combined to establish micro-level diagnostics of these underdeveloped areas, and the insights can inform an effective intervention program. Machine learning can further identify potential donors, investors, and contributors across the globe based on their skill set, interests, history, ethnicity, purchasing power, and native connection to the location of the proposed program. Even adequate resource allocation and efficient program design will not guarantee success unless execution is supervised at the grass-roots level; data analytics can be used to monitor project progress and effectiveness, and to detect anomalies in case of fraud or mismanagement of funds.
Raunak Bhandari / Ankit Desai / Usha Rengaraju - Knowledge Graph from Natural Language: Incorporating Order from Textual Chaos
Raunak Bhandari, Sr. Data Scientist, Embibe; Ankit Desai, Lead Data Scientist, Embibe; Usha Rengaraju, Data Scientist, Mysuru Consulting Group
What If I told you that instead of the age-old saying that "a picture is worth a thousand words", it could be that "a word is worth a thousand pictures"?
Language evolved as an abstraction of distilled information observed and collected from the environment, enabling sophisticated and efficient interpersonal communication, and it is responsible for humanity's ability to collaborate by storing and sharing experiences. Words represent evocative abstractions over information encoded in our memory and are a composition of many primitive information types.
That is why language processing is a much more challenging domain, and why it witnessed a delayed 'ImageNet moment'.
One of the cornerstone applications of natural language processing is to leverage the language's inherent structural properties to build a knowledge graph of the world.
A knowledge graph is a form of rich knowledge base that represents information as an interconnected web of entities and their interactions with each other. This naturally manifests as a graph data structure, where nodes represent entities and the relationships between them are the edges.
Automatically constructing and leveraging it in an intelligent system is an AI-hard problem, and an amalgamation of a wide variety of fields like natural language processing, information extraction and retrieval, graph algorithms, deep learning, etc.
It represents a paradigm shift for artificial intelligence systems by going beyond deep learning driven pattern recognition and towards more sophisticated forms of intelligence rooted in reasoning to solve much more complicated tasks.
To elucidate the difference between reasoning and pattern recognition, consider computer vision: the vision stack processes an image to detect shapes and patterns in order to identify objects; that is pattern recognition. Reasoning is much more complex: associating detected objects with each other in order to meaningfully describe a scene. To accomplish this, a system needs a rich understanding of the entities within the scene and their relationships to each other.
To understand a scene where a person is drinking a can of cola, a system needs to understand concepts like people; that they drink certain liquids via their mouths; that liquids can be placed in metallic containers which can be held in a palm to be consumed; and the generational phenomenon that is cola, among others. A sophisticated vision system can then use this rich understanding to fetch details about cola in order to alert the user to their calorie intake, or to update preferences for a customer. A knowledge graph's 'awareness' of world phenomena can thus be used to augment a vision system to facilitate such higher-order semantic reasoning.
In production systems, though, reasoning may be cast as a pattern recognition problem by limiting the system's scope for feasibility, but this may be insufficient as the complexity of the system scales or as we try to solve general intelligence.
Challenges in building a Knowledge Graph
There are two primary challenges in integrating knowledge graphs into systems: acquiring the knowledge and constructing the graph, and effectively leveraging the graph with robust algorithms to solve reasoning tasks. Creation of the knowledge graph can vary widely with the breadth and complexity of the domain, from purely manual curation to automatic construction from unstructured and semi-structured sources of knowledge, such as books and Wikipedia.
Many natural language processing tasks are precursors to building knowledge graphs from unstructured text: syntactic parsing, information extraction, entity linking, named entity recognition, relationship extraction, semantic parsing, semantic role labeling, entity disambiguation, and so on. Open information extraction is an active area of research on extracting semantic triplets of subject ('John'), predicate ('eats'), object ('burger') from plain text, which are used to build the knowledge graph automatically.
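As a toy illustration of the triplet idea, the sketch below pulls (subject, predicate, object) triples out of trivially simple sentences and accumulates them into an adjacency-list graph. The pattern, sentences, and helper names are our own; real open information extraction systems rely on syntactic parsing, not regular expressions.

```python
import re

# Naive illustration only: extract (subject, predicate, object)
# triplets from simple "X <verb>s Y" sentences with a regex, then
# accumulate them into an adjacency-list knowledge graph.
TRIPLET = re.compile(r"^(\w+)\s+(\w+s)\s+(\w+)\.?$")

def extract_triplet(sentence):
    """Return a (subject, predicate, object) tuple, or None."""
    m = TRIPLET.match(sentence.strip())
    return (m.group(1), m.group(2), m.group(3)) if m else None

def build_graph(sentences):
    """Nodes are subjects; each edge is a (predicate, object) pair."""
    graph = {}
    for sent in sentences:
        triplet = extract_triplet(sent)
        if triplet:
            subj, pred, obj = triplet
            graph.setdefault(subj, []).append((pred, obj))
    return graph

# build_graph(["John eats burger."]) -> {'John': [('eats', 'burger')]}
```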
A very interesting approach to this problem is the extraction of frame semantics. Frame semantics relates linguistic semantics to encyclopedic knowledge; the basic idea is that the meaning of a word is linked to all the essential knowledge that relates to it. For example, to understand the word "sell", it is necessary to also know about commercial transactions, which involve a seller, a buyer, goods, payment, and the relations between these, all of which can be represented in a knowledge graph.
This workshop will focus on building such a knowledge graph from unstructured text.
Attendees will also learn good research practices, such as organizing code and modularizing output, for productive data wrangling and improved algorithm performance.
Knowledge Graph at Embibe
We will showcase how Embibe's proprietary Knowledge Graph manifests and how it's leveraged across a multitude of projects in our Data Science Lab.
Saikat Sarkar / Dr Sweta Choudhary / Raunak Bhandari / Srikanth Ramaswamy / Usha Rengaraju - AI Meets Neuroscience
Saikat Sarkar, Sr. Consultant Manager - AA & Human Data Science, IMS Health; Dr Sweta Choudhary, Head - Medical Products & Services, Medwell Ventures (Nightingales); Raunak Bhandari, Sr. Data Scientist, Embibe; Srikanth Ramaswamy; Usha Rengaraju, Data Scientist, Mysuru Consulting Group
This is a mixer workshop in which clinicians, medical experts, neuroimaging experts, neuroscientists, data scientists, and statisticians will come together under one roof.
The theme will be updated soon.