Minimizing CPU utilization for deep networks

Aug 8th, 05:30 - 06:15 PM | Room: Jupiter | 89 Interested

The advent of machine learning, along with its integration with computer vision, has enabled users to efficiently develop image-based solutions for innumerable use cases. A machine learning model consists of an algorithm that draws meaningful correlations from the data without being tightly coupled to a specific set of rules. It is crucial to explain the subtle nuances of the network along with the use case we are trying to solve. As technology has advanced, image quality has increased, which in turn has increased the resources needed to process images when building a model. The main question, then, is how to develop lightweight models while keeping the performance of the system intact.
To connect the dots, we will talk about developing these applications specifically to provide equally accurate results without using many resources. This is achieved by using image processing techniques along with optimizing the network architecture.
These applications will range from recognizing digits and alphabets that the user can 'draw' at runtime, to a state-of-the-art facial recognition system, predicting hand emojis, a self-driving system, detecting malaria and brain tumors, and Google's 'Quick, Draw!' project of hand doodles.
In this presentation, we will discuss how to develop such applications while minimizing CPU usage.


Outline/Structure of the Demonstration

The presentation will include code excerpts for the pre-processing and computer vision parts that filter the unwanted background out of the data. Each excerpt will be followed by a demo of how the changes work in real time.

For instance, I will take up a research paper by NVIDIA on behavioral cloning for self-driving cars. We can reduce the number of trainable parameters of the model proposed in the paper by 50% by using an optimized CNN model, thus saving on training and prediction time.
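To illustrate where such savings come from, trainable parameters can be counted layer by layer; the layer shapes below are purely illustrative, not the exact ones from the NVIDIA paper or from my implementation:

```python
# Count trainable parameters of a CNN analytically.
# Conv2D: (kernel_h * kernel_w * in_channels + 1) * out_channels
# Dense:  (in_features + 1) * out_features

def conv_params(kh, kw, cin, cout):
    """Weights plus one bias per output channel."""
    return (kh * kw * cin + 1) * cout

def dense_params(fan_in, fan_out):
    """Weights plus one bias per output unit."""
    return (fan_in + 1) * fan_out

# A wider stack trained on 3-channel input (illustrative stand-in
# for the original model) ...
original = (
    conv_params(5, 5, 3, 24)
    + conv_params(5, 5, 24, 36)
    + conv_params(3, 3, 36, 48)
    + dense_params(48 * 4 * 4, 100)
    + dense_params(100, 10)
)

# ... versus a slimmer stack trained on 1-channel, filtered input.
optimized = (
    conv_params(5, 5, 1, 16)
    + conv_params(3, 3, 16, 32)
    + dense_params(32 * 4 * 4, 64)
    + dense_params(64, 10)
)

print(f"original: {original:,}  optimized: {optimized:,}  "
      f"saved: {1 - optimized / original:.0%}")
```

Every parameter removed is a multiply-accumulate saved on both the forward and backward pass, which is why trimming channels and layer widths translates directly into lower CPU cost.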

First, we will formulate and address a strong problem statement, followed by a thorough literature review. Once these are taken care of, we will discuss data gathering, followed by algorithm evaluation and future scope.

While giving each demo, I will talk about the models and algorithms used, why the literature review is the most important phase of your project, and how contributing to the community ultimately helps you.

So for the demos, I am planning to cover:

- MNIST

- Autopilot (NVIDIA)

- Emojinator

- Malaria Detection

- Quick, Draw (Google)

So, to pinpoint the techniques for minimizing CPU resources, I am going to discuss:

  1. Normalization of data (how and why).
  2. Stripping channels from the images: instead of all three color channels, we can use only one, or use them separately to train the model.
  3. Rescaling/augmentation of the data.
  4. Designing filters to isolate the object/region of interest and remove excessive background noise.
  5. Using the fit_generator capability of TensorFlow/Keras: instead of loading the entire dataset at once, which might exhaust RAM, we can use multiprocessing to load data batch-wise at runtime.
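A minimal sketch of the first two points (normalization and channel stripping), assuming the images arrive as NumPy arrays of 8-bit RGB pixels; the luma weights are the standard ITU-R BT.601 ones:

```python
import numpy as np

def to_grayscale(images):
    """Collapse 3 color channels into 1 (ITU-R BT.601 luma weights),
    cutting the input volume, and the first conv layer's cost, by ~3x."""
    # images: (n, h, w, 3) uint8  ->  (n, h, w, 1) float32
    weights = np.array([0.299, 0.587, 0.114], dtype=np.float32)
    gray = np.tensordot(images.astype(np.float32), weights, axes=([-1], [0]))
    return gray[..., np.newaxis]

def normalize(images):
    """Scale pixel values from [0, 255] to [0, 1] so gradient descent
    starts from a well-conditioned input range."""
    return images.astype(np.float32) / 255.0

batch = np.random.randint(0, 256, size=(8, 64, 64, 3), dtype=np.uint8)
x = normalize(to_grayscale(batch))
print(x.shape)  # -> (8, 64, 64, 1): one channel instead of three
```

The same pipeline runs unchanged in front of any of the demo models; only the loader feeding it differs per dataset.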

Apart from these findings, I would also require 10-12 minutes to briefly describe my work at the Centre for Education, Research & Development for Deaf & Dumb in Pune on building an Indian Sign Language recognition application to aid the progress of kids at the school.

Regarding the NVIDIA research paper: the total number of trainable parameters (as per the model described in the paper) is 132,501. With my implementation, however, we only need to train 80,213 parameters, and the accuracy can be seen in the video I shared (I don't have the exact time taken, but I will include those details in the talk).

For normalization: the unnormalized data takes 371 µs/step (accuracy: 22%), whereas the normalized data takes 323 µs/step (accuracy: 73%).

Learning Outcome

By the end of the session, the audience will have a clearer understanding of building optimized, vision-based models that can run on low-resource hardware.

The emphasis will also be on contributing to the data science open-source community.

Update: I presented similar work at ODSC Boston, 2019, and have updated the content according to the feedback.

Target Audience

Machine Learning enthusiasts as well as virtuosos.

Prerequisites for Attendees

A basic understanding of neural networks and image processing is recommended.

Submitted 7 months ago

Public Feedback

Suggest improvements to the Speaker
  • Kuldeep Jiwani  ~  5 months ago

    Hi Akshay,

You have proposed a very important topic: minimising CPU utilisation for deep networks.

I would like to reiterate what Anoop asked earlier: how are you minimising CPU utilisation? This is not coming out clearly from the description. You have mentioned the paper by NVIDIA on behavioural cloning, but where are the details for it, and will you be covering the content of the research paper in the talk?

Moreover, the notebook that you have attached doesn't elaborate much on it either. I can see the accuracy of the model going up by normalising the data, but where is the CPU minimisation? Please elaborate more on your main topic.

    • Akshay Bahadur  ~  5 months ago

      Hi Kuldeep,

      So, to pinpoint the techniques for minimizing CPU resources, I am going to discuss:

      1. Normalization of data (how and why).
      2. Stripping channels from the images: instead of all three color channels, we can use only one, or use them separately to train the model.
      3. Rescaling/augmentation of the data.
      4. Designing filters to isolate the object/region of interest and remove excessive background noise.
      5. Using the fit_generator capability of TensorFlow/Keras: instead of loading the entire dataset at once, which might exhaust RAM, we can use multiprocessing to load data batch-wise at runtime.

      Apart from these findings, I would also require 10-12 minutes to briefly describe my work at the Centre for Education, Research & Development for Deaf & Dumb in Pune on developing an Indian Sign Language recognition application to aid the development of kids at the school.

      Regarding the NVIDIA research paper: the total number of trainable parameters (as per the model described in the paper) is 132,501. With my implementation, however, we only need to train 80,213 parameters, and the accuracy can be seen in the video I shared (I don't have the exact time taken, but I will include those details in the talk).

      For normalization: the unnormalized data takes 371 µs/step (accuracy: 22%), whereas the normalized data takes 323 µs/step (accuracy: 73%).

      I hope these details will help make my talk more effective. I am going to provide specific details on resource usage, as well as RAM and GPU usage, during the talk.

      • Kuldeep Jiwani  ~  5 months ago

        Hi Akshay,

        This looks perfect, thanks for providing the details.

        It would be better if you also add the above details in your proposal, so that anyone landing on your page gets a sense of the depth you are covering.

        If the proposal is finally accepted by the program committee then someone will get back to you on your 10 - 12 minutes request.

        • Akshay Bahadur  ~  5 months ago

          Thanks Kuldeep.
          Let me add the details to the proposal.

  • Anoop Kulkarni  ~  6 months ago

    Akshay, thanks for your proposal. It does sound like an interesting idea to do vision on low-resource / CPU systems. However, your description primarily talks about vision techniques that have existed for decades now. Also, the description does not focus on your main pitch, which is using CPU-type resources to carry out these tasks.

    Could you please provide a time roadmap for the talk and capture therein how you intend to discuss each step for CPU-type systems? Also, it would help if you could elaborate on exactly which demos you plan to include and what the scope of each one is.

    Thanks

    • Akshay Bahadur  ~  5 months ago

      Hi Anoop,

      Thanks for your insight.

      I believe I did not elaborate enough. Computer vision has existed for decades, but my proposal is about using those techniques effectively to train your model more efficiently.

      For instance, I will take up a research paper by NVIDIA on behavioural cloning for self-driving cars. We can reduce the number of trainable parameters of the model proposed in the paper by 50% by using an optimized CNN model, thus saving on training and prediction time.

      This is achieved through a combination of vision techniques, hyperparameter tuning, and planning the model architecture.

      I have presented similar work at ODSC Boston, 2019.

      You can find the link to that proposal here : https://odsc.com/webinar-calendar#march

      Link to the resource : https://nbviewer.jupyter.org/github/akshaybahadur21/ODSC-Boston-2019/blob/master/DeepVision.ipynb


      So for the demos, I am planning to cover:

      - MNIST

      - Autopilot (NVIDIA)

      - Emojinator

      - Malaria Detection

      - Quick, Draw (Google)


      Let me know if you need me to provide clarity on any other aspects.


  • Liked Dipanjan Sarkar
    keyboard_arrow_down

    Dipanjan Sarkar - Explainable Artificial Intelligence - Demystifying the Hype

    Dipanjan Sarkar
    Dipanjan Sarkar
    Data Scientist
    Red Hat
    schedule 8 months ago
    Sold Out!
    45 Mins
    Tutorial
    Intermediate

    The field of Artificial Intelligence powered by Machine Learning and Deep Learning has gone through some phenomenal changes over the last decade. Starting off as just a pure academic and research-oriented domain, we have seen widespread industry adoption across diverse domains including retail, technology, healthcare, science and many more. More than often, the standard toolbox of machine learning, statistical or deep learning models remain the same. New models do come into existence like Capsule Networks, but industry adoption of the same usually takes several years. Hence, in the industry, the main focus of data science or machine learning is more ‘applied’ rather than theoretical and effective application of these models on the right data to solve complex real-world problems is of paramount importance.

    A machine learning or deep learning model by itself consists of an algorithm which tries to learn latent patterns and relationships from data without hard-coding fixed rules. Hence, explaining how a model works to the business always poses its own set of challenges. There are some domains in the industry especially in the world of finance like insurance or banking where data scientists often end up having to use more traditional machine learning models (linear or tree-based). The reason being that model interpretability is very important for the business to explain each and every decision being taken by the model.However, this often leads to a sacrifice in performance. This is where complex models like ensembles and neural networks typically give us better and more accurate performance (since true relationships are rarely linear in nature).We, however, end up being unable to have proper interpretations for model decisions.

    To address and talk about these gaps, I will take a conceptual yet hands-on approach where we will explore some of these challenges in-depth about explainable artificial intelligence (XAI) and human interpretable machine learning and even showcase with some examples using state-of-the-art model interpretation frameworks in Python!

  • Liked Dr. Vikas Agrawal
    keyboard_arrow_down

    Dr. Vikas Agrawal - Non-Stationary Time Series: Finding Relationships Between Changing Processes for Enterprise Prescriptive Systems

    45 Mins
    Talk
    Intermediate

    It is too tedious to keep on asking questions, seek explanations or set thresholds for trends or anomalies. Why not find problems before they happen, find explanations for the glitches and suggest shortest paths to fixing them? Businesses are always changing along with their competitive environment and processes. No static model can handle that. Using dynamic models that find time-delayed interactions between multiple time series, we need to make proactive forecasts of anomalous trends of risks and opportunities in operations, sales, revenue and personnel, based on multiple factors influencing each other over time. We need to know how to set what is “normal” and determine when the business processes from six months ago do not apply any more, or only applies to 35% of the cases today, while explaining the causes of risk and sources of opportunity, their relative directions and magnitude, in the context of the decision-making and transactional applications, using state-of-the-art techniques.

    Real world processes and businesses keeps changing, with one moving part changing another over time. Can we capture these changing relationships? Can we use multiple variables to find risks on key interesting ones? We will take a fun journey culminating in the most recent developments in the field. What methods work well and which break? What can we use in practice?

    For instance, we can show a CEO that they would miss their revenue target by over 6% for the quarter, and tell us why i.e. in what ways has their business changed over the last year. Then we provide the prioritized ordered lists of quickest, cheapest and least risky paths to help turn them over the tide, with estimates of relative costs and expected probability of success.

  • Liked Dr. C.S.Jyothirmayee
    keyboard_arrow_down

    Dr. C.S.Jyothirmayee / Usha Rengaraju / Vijayalakshmi Mahadevan - Deep learning powered Genomic Research

    90 Mins
    Workshop
    Advanced

    The event disease happens when there is a slip in the finely orchestrated dance between physiology, environment and genes. Treatment with chemicals (natural, synthetic or combination) solved some diseases but others persisted and got propagated along the generations. Molecular basis of disease became prime center of studies to understand and to analyze root cause. Cancer also showed a way that origin of disease, detection, prognosis and treatment along with cure was not so uncomplicated process. Treatment of diseases had to be done case by case basis (no one size fits).

    With the advent of next generation sequencing, high through put analysis, enhanced computing power and new aspirations with neural network to address this conundrum of complicated genetic elements (structure and function of various genes in our systems). This requires the genomic material extraction, their sequencing (automated system) and analysis to map the strings of As, Ts, Gs, and Cs which yields genomic dataset. These datasets are too large for traditional and applied statistical techniques. Consequently, the important signals are often incredibly small along with blaring technical noise. This further requires far more sophisticated analysis techniques. Artificial intelligence and deep learning gives us the power to draw clinically useful information from the genetic datasets obtained by sequencing.

    Precision of these analyses have become vital and way forward for disease detection, its predisposition, empowers medical authorities to make fair and situationally decision about patient treatment strategies. This kind of genomic profiling, prediction and mode of disease management is useful to tailoring FDA approved treatment strategies based on these molecular disease drivers and patient’s molecular makeup.

    Now, the present scenario encourages designing, developing, testing of medicine based on existing genetic insights and models. Deep learning models are helping to analyze and interpreting tiny genetic variations ( like SNPs – Single Nucleotide Polymorphisms) which result in unraveling of crucial cellular process like metabolism, DNA wear and tear. These models are also responsible in identifying disease like cancer risk signatures from various body fluids. They have the immense potential to revolutionize healthcare ecosystem. Clinical data collection is not streamlined and done in a haphazard manner and the requirement of data to be amenable to a uniform fetchable and possibility to be combined with genetic information would power the value, interpretation and decisive patient treatment modalities and their outcomes.

    There is hugh inflow of medical data from emerging human wearable technologies, along with other health data integrated with ability to do quickly carry out complex analyses on rich genomic databases over the cloud technologies … would revitalize disease fighting capability of humans. Last but still upcoming area of application in direct to consumer genomics (success of 23andMe).

    This road map promises an end-to-end system to face disease in its all forms and nature. Medical research, and its applications like gene therapies, gene editing technologies like CRISPR, molecular diagnostics and precision medicine could be revolutionized by tailoring a high-throughput computing method and its application to enhanced genomic datasets.

  • Liked Badri Narayanan Gopalakrishnan
    keyboard_arrow_down

    Badri Narayanan Gopalakrishnan / Shalini Sinha / Usha Rengaraju - Lifting Up: How AI and Big data can contribute to anti-poverty programs

    45 Mins
    Case Study
    Intermediate

    Ending poverty and zero hunger are top two goals United Nations aims to achieve by 2030 under its sustainable development program. Hunger and poverty are byproducts of multiple factors and fighting them require multi-fold effort from all stakeholders. Artificial Intelligence and Machine learning has transformed the way we live, work and interact. However economics of business has limited its application to few segments of the society. A much conscious effort is needed to bring the power of AI to the benefits of the ones who actually need it the most – people below the poverty line. Here we present our thoughts on how deep learning and big data analytics can be combined to enable effective implementation of anti-poverty programs. The advancements in deep learning , micro diagnostics combined with effective technology policy is the right recipe for a progressive growth of a nation. Deep learning can help identify poverty zones across the globe based on night time images where the level of light correlates to higher economic growth. Once the areas of lower economic growth are identified, geographic and demographic data can be combined to establish micro level diagnostics of these underdeveloped area. The insights from the data can help plan an effective intervention program. Machine Learning can be further used to identify potential donors, investors and contributors across the globe based on their skill-set, interest, history, ethnicity, purchasing power and their native connect to the location of the proposed program. Adequate resource allocation and efficient design of the program will also not guarantee success of a program unless the project execution is supervised at grass-root level. Data Analytics can be used to monitor project progress, effectiveness and detect anomaly in case of any fraud or mismanagement of funds.

  • Liked Pankaj Kumar
    keyboard_arrow_down

    Pankaj Kumar / Abinash Panda / Usha Rengaraju - Quantitative Finance :Global macro trading strategy using Probabilistic Graphical Models

    90 Mins
    Workshop
    Advanced

    { This is a handson workshop in pgmpy package. The creator of pgmpy package Abinash Panda will do the code demonstration }

    Crude oil plays an important role in the macroeconomic stability and it heavily influences the performance of the global financial markets. Unexpected fluctuations in the real price of crude oil are detrimental to the welfare of both oil-importing and oil-exporting economies.Global macro hedge-funds view forecast the price of oil as one of the key variables in generating macroeconomic projections and it also plays an important role for policy makers in predicting recessions.

    Probabilistic Graphical Models can help in improving the accuracy of existing quantitative models for crude oil price prediction as it takes in to account many different macroeconomic and geopolitical variables .

    Hidden Markov Models are used to detect underlying regimes of the time-series data by discretising the continuous time-series data. In this workshop we use Baum-Welch algorithm for learning the HMMs, and Viterbi Algorithm to find the sequence of hidden states (i.e. the regimes) given the observed states (i.e. monthly differences) of the time-series.

    Belief Networks are used to analyse the probability of a regime in the Crude Oil given the evidence as a set of different regimes in the macroeconomic factors . Greedy Hill Climbing algorithm is used to learn the Belief Network, and the parameters are then learned using Bayesian Estimation using a K2 prior. Inference is then performed on the Belief Networks to obtain a forecast of the crude oil markets, and the forecast is tested on real data.

  • Liked Raunak Bhandari
    keyboard_arrow_down

    Raunak Bhandari / Ankit Desai / Usha Rengaraju - Knowledge Graph from Natural Language: Incorporating order from textual chaos

    90 Mins
    Workshop
    Advanced

    Intro

    What If I told you that instead of the age-old saying that "a picture is worth a thousand words", it could be that "a word is worth a thousand pictures"?

    Language evolved as an abstraction of distilled information observed and collected from the environment for sophisticated and efficient interpersonal communication and is responsible for humanity's ability to collaborate by storing and sharing experiences. Words represent evocative abstractions over information encoded in our memory and are a composition of many primitive information types.

    That is why language processing is a much more challenging domain and witnessed a delayed 'imagenet' moment.

    One of the cornerstone applications of natural language processing is to leverage the language's inherent structural properties to build a knowledge graph of the world.

    Knowledge Graphs

    Knowledge graph is a form of a rich knowledge base which represents information as an interconnected web of entities and their interactions with each other. This naturally manifests as a graph data structure, where nodes represent entities and the relationship between them are the edges.

    Automatically constructing and leveraging it in an intelligent system is an AI-hard problem, and an amalgamation of a wide variety of fields like natural language processing, information extraction and retrieval, graph algorithms, deep learning, etc.

    It represents a paradigm shift for artificial intelligence systems by going beyond deep learning driven pattern recognition and towards more sophisticated forms of intelligence rooted in reasoning to solve much more complicated tasks.

    To elucidate the differences between reasoning and pattern recognition: consider the problem of computer vision: the vision stack processes an image to detect shapes and patterns in order to identify objects - this is pattern recognition, whereas reasoning is much more complex - to associate detected objects with each other in order to meaningfully describe a scene. For this to be accomplished, a system needs to have a rich understanding of the entities within the scene and their relationships with each other.

    To understand a scene where a person is drinking a can of cola, a system needs to understand concepts like people, that they drink certain liquids via their mouths, liquids can be placed into metallic containers which can be held within a palm to be consumed, and the generational phenomenon that is cola, among others. A sophisticated vision system can then use this rich understanding to fetch details about cola in-order to alert the user of their calorie intake, or to update preferences for a customer. A Knowledge Graph's 'awareness' of the world phenomenons can thus be used to augment a vision system to facilitate such higher order semantic reasoning.

    In production systems though, reasoning may be cast into a pattern recognition problem by limiting the scope of the system for feasibility, but this may be insufficient as the complexity of the system scales or we try to solve general intelligence.

    Challenges in building a Knowledge Graph

    There are two primary challenges towards integrating knowledge graphs in systems: acquisition of knowledge and construction of the graph and effectively leveraging it with robust algorithms to solve reasoning tasks. Creation of the knowledge graph can vary widely depending on the breadth and complexity of the domain - from just manual curation to automatically constructing it by leveraging unstructured/semi-structured sources of knowledge, like books and Wikipedia.

    Many natural language processing tasks are precursors towards building knowledge graphs from unstructured text, like syntactic parsing, information extraction, entity linking, named entity recognition, relationship extraction, semantic parsing, semantic role labeling, entity disambiguation, etc. Open information extraction is an active area of research on extracting semantic triplets of object ('John'), predicate ('eats'), subject ('burger') from plain text, which are used to build the knowledge graph automatically.

    A very interesting approach to this problem is the extraction of frame semantics. Frame semantics relates linguistic semantics to encyclopedic knowledge and the basic idea is that the meaning of a word is linked to all essential knowledge that relates to it, for eg. to understand the word "sell", it's necessary to also know about commercial transactions, which involve a seller, buyer, goods, payment, and the relations between these, which can be represented in a knowledge graph.

    This workshop will focus on building such a knowledge graph from unstructured text.

    Learn good research practices like organizing code and modularizing output for productive data wrangling to improve algorithm performance.

    Knowledge Graph at Embibe

    We will showcase how Embibe's proprietary Knowledge Graph manifests and how it's leveraged across a multitude of projects in our Data Science Lab.

  • Liked Shrutika Poyrekar
    keyboard_arrow_down

    Shrutika Poyrekar / kiran karkera / Usha Rengaraju - Introduction to Bayesian Networks

    90 Mins
    Workshop
    Advanced

    { This is a handson workshop . The use case is Traffic analysis . }

    Most machine learning models assume independent and identically distributed (i.i.d) data. Graphical models can capture almost arbitrarily rich dependency structures between variables. They encode conditional independence structure with graphs. Bayesian network, a type of graphical model describes a probability distribution among all variables by putting edges between the variable nodes, wherein edges represent the conditional probability factor in the factorized probability distribution. Thus Bayesian Networks provide a compact representation for dealing with uncertainty using an underlying graphical structure and the probability theory. These models have a variety of applications such as medical diagnosis, biomonitoring, image processing, turbo codes, information retrieval, document classification, gene regulatory networks, etc. amongst many others. These models are interpretable as they are able to capture the causal relationships between different features .They can work efficiently with small data and also deal with missing data which gives it more power than conventional machine learning and deep learning models.

    In this session, we will discuss concepts of conditional independence, d- separation , Hammersley Clifford theorem , Bayes theorem, Expectation Maximization and Variable Elimination. There will be a code walk through of simple case study.

  • Liked Maryam Jahanshahi
    keyboard_arrow_down

    Maryam Jahanshahi - Applying Dynamic Embeddings in Natural Language Processing to Analyze Text over Time

    Maryam Jahanshahi
    Maryam Jahanshahi
    Research Scientist
    TapRecruit
    schedule 8 months ago
    Sold Out!
    45 Mins
    Case Study
    Intermediate

    Many data scientists are familiar with word embedding models such as word2vec, which capture semantic similarity of words in a large corpus. However, word embeddings are limited in their ability to interrogate a corpus alongside other context or over time. Moreover, word embedding models either need significant amounts of data, or tuning through transfer learning of a domain-specific vocabulary that is unique to most commercial applications.

    In this talk, I will introduce exponential family embeddings. Developed by Rudolph and Blei, these methods extend the idea of word embeddings to other types of high-dimensional data. I will demonstrate how they can be used to conduct advanced topic modeling on datasets that are medium-sized, which are specialized enough to require significant modifications of a word2vec model and contain more general data types (including categorical, count, continuous). I will discuss how my team implemented a dynamic embedding model using Tensor Flow and our proprietary corpus of job descriptions. Using both categorical and natural language data associated with jobs, we charted the development of different skill sets over the last 3 years. I will specifically focus the description of results on how tech and data science skill sets have developed, grown and pollinated other types of jobs over time.

  • Liked Saurabh Jha
    keyboard_arrow_down

    Saurabh Jha / Rohan Shravan / Usha Rengaraju - Hands on Deep Learning for Computer Vision

    480 Mins
    Workshop
    Intermediate

    Computer Vision has lots of applications including medical imaging, autonomous
    vehicles, industrial inspection and augmented reality. Use of Deep Learning for
    computer Vision can be categorized into multiple categories for both images and
    videos – Classification, detection, segmentation & generation.
    Having worked in Deep Learning with a focus on Computer Vision have come
    across various challenges and learned best practices over a period
    experimenting with cutting edge ideas. This workshop is for Data Scientists &
    Computer Vision Engineers whose focus is deep learning. We will cover state of
    the art architectures for Image Classification, Segmentation and practical tips &
    tricks to train a deep neural network models. It will be hands on session where
    every concepts will be introduced through python code and our choice of deep
    learning framework will be PyTorch v1.0 and Keras.

    Given we have only 8 hours, we will cover the most important fundamentals and
    current techniques, and avoid anything that is obsolete or unused by
    state-of-the-art algorithms. We will start directly by building the intuition
    for Convolutional Neural Networks and focus on core architectural problems. We
    will try to answer some of the hard questions, such as how many layers a
    network should have and how many kernels we should add. We will trace the
    architectural journey of some of the best papers and discover what each brought
    to the field of Vision AI, making today's best networks possible. We will cover
    9 different kinds of convolutions, spanning a spectrum of problems such as
    running DNNs on constrained hardware, super-resolution and image segmentation.
    These concepts will prepare all of us to move to harder problems like
    segmentation or super-resolution later, but we will focus on object
    recognition, followed by object detection. We will build our networks step by
    step, learning how optimization techniques actually improve our networks and
    exactly when we should introduce them. We hope to leave you with the confidence
    to read research papers as second nature. Given we have 8 hours and want the
    sessions to be productive, rather than introducing all the problems and
    solutions, we will focus on the fundamentals of modern deep neural networks.
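To make the "convolutions for constrained hardware" point concrete: one well-known variant likely in that spectrum is the depthwise separable convolution (popularized by MobileNet), which factors a standard convolution into a per-channel spatial filter plus a 1x1 channel mixer. The channel and kernel sizes below are just example numbers; counting parameters shows the saving.

```python
def standard_conv_params(c_in, c_out, k):
    # every output channel mixes all input channels over a k x k window
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # depthwise: one k x k filter per input channel
    # pointwise: a 1 x 1 convolution that mixes channels
    return c_in * k * k + c_in * c_out

c_in, c_out, k = 64, 128, 3
std = standard_conv_params(c_in, c_out, k)
sep = depthwise_separable_params(c_in, c_out, k)
print(std, sep, round(std / sep, 1))  # 73728 8768 8.4
```

Roughly an 8x reduction in parameters (and multiply-adds) for this layer shape, which is exactly the kind of trade-off that makes DNNs feasible on constrained hardware.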

  • Aamir Nazir - DeepMind Alpha Fold 101

    Researcher
    45 Mins
    Talk
    Intermediate

    "Today we’re excited to share DeepMind’s first significant milestone in demonstrating how artificial intelligence research can drive and accelerate new scientific discoveries. With a strongly interdisciplinary approach to our work, DeepMind has brought together experts from the fields of structural biology, physics, and machine learning to apply cutting-edge techniques to predict the 3D structure of a protein based solely on its genetic sequence." source: https://deepmind.com/blog/alphafold/

    Over the past five decades, scientists have been able to determine the shapes of proteins in labs using experimental techniques such as cryo-electron microscopy, nuclear magnetic resonance and X-ray crystallography, but each method depends on a lot of trial and error, which can take years and cost tens of thousands of dollars per structure. This is why biologists are turning to AI methods as an alternative to this long and laborious process for difficult proteins.

    Recently released by DeepMind, AlphaFold beat teams from top pharmaceutical companies with 100K+ employees, such as Pfizer and Novartis, at predicting protein structures in the CASP13 challenge. It outperformed all the other competitors and emerged first by a huge margin, correctly predicting 25 protein structures where the second-place winner correctly predicted only 9, and that with only 29K of the 129K available data points on different proteins.

    This research is a major breakthrough in the field: predicting how proteins fold into the different types of proteins that serve different functions. This matters because it could lead to a better understanding of, and possibly cures for, diseases like Alzheimer's and mad cow disease, which are believed to be caused by misfolding of proteins in the body.

    The architecture of the network was simple: at a high level, it consisted of a deep residual convolutional neural network that predicts inter-residue distances, followed by gradient descent to optimize the full protein structure against those predicted features.

    From this talk, the audience will learn how to reproduce the architecture of AlphaFold, along with some basics of how different protein strands affect the body and the function of proteins. The talk will focus mostly on the technical side of AlphaFold.
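AlphaFold's actual pipeline is far more involved, but the final "gradient descent over the full protein" step can be caricatured as distance-geometry optimization: take the network's predicted inter-residue distances as a target and minimize a squared-error potential over 3D coordinates. Everything below, including the fabricated distance matrix standing in for network output, the chain length and the learning rate, is a toy illustration, not DeepMind's code.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10  # residues in a toy chain

# Stand-in for the network's output: fabricate a "predicted" distance
# matrix from a hidden reference conformation.
ref = rng.standard_normal((n, 3))
d_pred = np.linalg.norm(ref[:, None] - ref[None, :], axis=-1)

def potential(xyz):
    d = np.linalg.norm(xyz[:, None] - xyz[None, :], axis=-1)
    return float(((d - d_pred) ** 2).sum())

xyz = rng.standard_normal((n, 3))  # random initial structure
v_start = potential(xyz)

lr = 0.002
for _ in range(1000):
    diff = xyz[:, None] - xyz[None, :]  # pairwise displacements
    d = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(d, 1.0)            # avoid 0/0; diff is zero there anyway
    # analytic gradient of the potential w.r.t. each coordinate
    grad = (4 * (d - d_pred) / d)[:, :, None] * diff
    xyz -= lr * grad.sum(axis=1)

v_end = potential(xyz)
print(v_start, "->", v_end)
```

The real system optimizes a richer potential (distance distributions, torsion angles, steric terms), but the mechanic, namely differentiating a predicted potential with respect to coordinates, is the same.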

  • Aamir Nazir - All-out Deep Learning - 101

    Researcher
    45 Mins
    Talk
    Beginner

    In this talk, we will discuss different problems and the different focus areas of Deep Learning. The session targets intermediate learners looking to go deeper into Deep Learning. We will take up different tasks, see which deep neural network architecture can solve each problem, and learn about the different neural network architectures available for the same task.

  • Aamir Nazir - Evolution Of Image Recognition And Object Segmentation: From Apes To Machines

    Researcher
    45 Mins
    Talk
    Intermediate

    For a long time, we have wondered how we could harness the amazing gift of vision, because with it we could reach new heights and open up endless possibilities, like cars that drive themselves. Along the path to harnessing this power, we have found numerous algorithms. In this talk, we will cover the latest trends in the field, the architecture of each algorithm, and the evolution of algorithms for the image recognition task. We will cover it all, from the dinosaur age of image recognition to the cyborg age of object segmentation and beyond: CNNs to R-CNNs to Mask R-CNN, with a close performance-wise analysis of these models.
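One concrete mechanism shared across the whole R-CNN family is pruning overlapping detections with greedy non-maximum suppression (NMS) over intersection-over-union (IoU). A minimal sketch, with made-up boxes and a typical 0.5 threshold for illustration:

```python
import numpy as np

def iou(a, b):
    # boxes as [x1, y1, x2, y2]
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    order = np.argsort(scores)[::-1]  # highest score first
    keep = []
    while len(order):
        i = order[0]
        keep.append(int(i))
        # drop every remaining box that overlaps the kept one too much
        order = order[1:][[iou(boxes[i], boxes[j]) < thresh for j in order[1:]]]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0, 2]: the near-duplicate of the top box is suppressed
```

Every detector in the talk's lineage, from R-CNN's region proposals to Mask R-CNN's per-class outputs, applies some variant of this post-processing step.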