Adversarial Attacks on Neural Networks

Since 2014, research on adversarial examples in deep neural networks has come a long way. This talk aims to be a comprehensive introduction to adversarial attacks, covering the main threat models (black-box and white-box) and approaches to creating adversarial examples, and will include demos. The talk will dive deep into the intuition behind why adversarial examples exhibit the properties they do, in particular their transferability across models and training data, as well as the high confidence of incorrect labels. Finally, we will go over various approaches to mitigating these attacks (adversarial training, defensive distillation, gradient masking, etc.) and discuss what seems to have worked best over the past year.
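
As a taste of the white-box setting, one classic way to create an adversarial example is the Fast Gradient Sign Method (FGSM): nudge the input by a small amount in the direction of the sign of the loss gradient. The sketch below applies the idea to a toy hand-weighted logistic model rather than a real deep network; the weights, input and epsilon are made up for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, w, b, y, eps):
    """FGSM on a logistic model p(y=1|x) = sigmoid(w.x + b).

    Moves x by eps (per coordinate) in the direction that increases
    the cross-entropy loss for the true label y.
    """
    p = sigmoid(w @ x + b)
    # Gradient of the cross-entropy loss w.r.t. the input x is (p - y) * w.
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy "model": weights chosen by hand, not trained.
w = np.array([2.0, -3.0, 1.0])
b = 0.1
x = np.array([0.5, -0.2, 0.3])
y = 1.0  # true label

x_adv = fgsm(x, w, b, y, eps=0.25)
print(sigmoid(w @ x + b))      # confidence on the clean input
print(sigmoid(w @ x_adv + b))  # confidence drops on the perturbed input
```

On a real image classifier the same small, bounded perturbation can be visually imperceptible while flipping the predicted label.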


Outline/Structure of the Talk

The presentation will follow this outline:

  • What are Adversarial attacks?
  • CIA Model of Security
  • Threat models
  • Examples and demos of Adversarial attacks
  • Proposed Defenses against adversarial attacks
  • Intuition behind Adversarial attacks
  • What’s next?

Learning Outcome

This talk is motivated by the question: are adversarial examples simply a fun toy problem for researchers, or a symptom of a deeper and more chronic frailty in our models? Attendees should come away realizing that deep learning models are just another tool, one susceptible to adversarial attacks. This can have huge implications, especially in a world with self-driving cars and other automation.

Target Audience

Deep Learning Practitioners or students interested in learning more about an up-and-coming area of research in this field.

Prerequisites for Attendees

A beginner-level understanding of how Deep Neural Networks work.

Submitted 1 year ago

Public Feedback

    • Dat Tran - Image ATM - Image Classification for Everyone

      Dat Tran
      Head of AI
      Axel Springer AI
      45 Mins

      We store and display millions of images. Our gallery contains pictures of all sorts: you'll find vacuum cleaners, bike helmets and hotel rooms in there. Working with huge volumes of images brings some challenges: How do we organize the galleries? What exactly is in there? Do we actually need all of it?

      To tackle these problems you first need to label all the pictures. In 2018 our Data Science team completed four projects in the area of image classification, and in 2019 many more were to come. Therefore, we decided to automate this process by creating a piece of software we call Image ATM (Automated Tagging Machine). With the help of transfer learning, Image ATM enables users to train a deep learning model without any knowledge or experience in machine learning. All you need is data and a couple of spare minutes!

      In this talk we will discuss the state-of-the-art technologies available for image classification and present Image ATM in the context of these technologies. We will then give a crash course on our product, guiding you through the different ways of using it: in the shell, in a Jupyter Notebook and in the cloud. We will also talk about our roadmap for Image ATM.
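
      The transfer-learning idea behind Image ATM can be sketched in miniature: keep a pretrained feature extractor frozen and train only a small classification head on your own labels. In the sketch below the "backbone" is just a fixed random projection standing in for a pretrained CNN, and the dataset is synthetic; it only illustrates the division of labour, not the actual Image ATM pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained backbone: a frozen random projection + ReLU.
# In a real pipeline this would be a pretrained CNN; it is never updated.
W_frozen = rng.normal(size=(8, 4))
def features(x):
    return np.maximum(W_frozen @ x, 0.0)

# Tiny synthetic labeled dataset: class decided by the sign of input 0.
X = rng.normal(size=(64, 4))
y = (X[:, 0] > 0).astype(float)

# Train only the small classification head (logistic regression on features).
w, b, lr = np.zeros(8), 0.0, 0.1
for _ in range(300):
    for xi, yi in zip(X, y):
        f = features(xi)
        p = 1.0 / (1.0 + np.exp(-(w @ f + b)))
        w -= lr * (p - yi) * f   # gradient step on the head only
        b -= lr * (p - yi)

preds = [(1.0 / (1.0 + np.exp(-(w @ features(xi) + b)))) > 0.5 for xi in X]
print(np.mean(np.array(preds) == y.astype(bool)))  # training accuracy
```

      Because only the small head is trained, very little data (and time) is needed, which is exactly the "data plus a couple of minutes" promise.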

    • Dipanjan Sarkar - Explainable Artificial Intelligence - Demystifying the Hype

      45 Mins

      The field of artificial intelligence, powered by machine learning and deep learning, has gone through some phenomenal changes over the last decade. Starting off as a purely academic and research-oriented domain, it has seen widespread industry adoption across diverse domains including retail, technology, healthcare, science and many more. More often than not, the standard toolbox of machine learning, statistical or deep learning models remains the same. New models do come into existence, like Capsule Networks, but their industry adoption usually takes several years. Hence, in industry, the main focus of data science or machine learning is 'applied' rather than theoretical, and the effective application of these models on the right data to solve complex real-world problems is of paramount importance.

      A machine learning or deep learning model by itself consists of an algorithm which tries to learn latent patterns and relationships from data without hard-coding fixed rules. Hence, explaining how a model works to the business always poses its own set of challenges. In some industry domains, especially in the world of finance such as insurance or banking, data scientists often end up having to use more traditional machine learning models (linear or tree-based), because model interpretability is very important: the business must be able to explain each and every decision the model takes. However, this often comes at a sacrifice in performance. Complex models like ensembles and neural networks typically give us better and more accurate performance (since true relationships are rarely linear in nature), but we then end up unable to provide proper interpretations for model decisions.

      To address these gaps, I will take a conceptual yet hands-on approach, exploring some of these challenges of explainable artificial intelligence (XAI) and human-interpretable machine learning in depth, and showcasing examples using state-of-the-art model interpretation frameworks in Python!
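
      One simple, model-agnostic technique in the spirit of those interpretation frameworks is permutation importance: shuffle one input column and measure how much the model's error grows. The sketch below uses a hand-written stand-in for the black-box model (not a trained one), so the expected ranking of features is known by construction:

```python
import numpy as np

rng = np.random.default_rng(42)

# A "black-box" model: the caller only sees predict(), not the internals.
def predict(X):
    # Internally the output depends strongly on column 0, weakly on
    # column 1, and not at all on column 2.
    return 3.0 * X[:, 0] + 0.5 * X[:, 1]

X = rng.normal(size=(500, 3))
y = predict(X)

def permutation_importance(predict, X, y):
    """Model-agnostic importance: how much does shuffling a column hurt?"""
    base_error = np.mean((predict(X) - y) ** 2)
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])  # break column j's link to y
        scores.append(np.mean((predict(Xp) - y) ** 2) - base_error)
    return np.array(scores)

scores = permutation_importance(predict, X, y)
print(scores)  # column 0 scores highest, column 2 near zero
```

      The same loop works unchanged on any model exposing a predict function, which is what makes it attractive for otherwise opaque ensembles and neural networks.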

    • 90 Mins

      Machine learning and deep learning have been rapidly adopted in various spheres of medicine, such as drug discovery, disease diagnosis, genomics, medical imaging and bioinformatics, for translating biomedical data into improved human healthcare. Machine learning and deep learning based healthcare applications assist physicians in making faster, cheaper and more accurate diagnoses.

      We have successfully developed three deep learning based healthcare applications and are currently working on two more healthcare related projects. In this workshop, we will discuss one healthcare application, titled "Deep Learning based Craniofacial Distance Measurement for Facial Reconstructive Surgery", which we developed using TensorFlow. Craniofacial distances play an important role in providing information related to facial structure. They comprise measurements of the head and face that are to be taken from an image. They are used in facial reconstructive surgeries such as cephalometry, treatment planning of various malocclusions, craniofacial anomalies, facial contouring, facial rejuvenation and different forehead surgeries, in which reliable and accurate data are very important and cannot be compromised.

      Our discussion of the healthcare application will include the precise problem statement, the major steps involved in the solution (deep learning based face detection & facial landmarking, and craniofacial distance measurement), the data set, experimental analysis, and the challenges faced & overcome to achieve this success. Subsequently, we will provide hands-on exposure to implementing this healthcare solution using TensorFlow. Finally, we will briefly discuss possible extensions of our work and the future scope of research in the healthcare sector.
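
      Once facial landmarks have been detected, a craniofacial distance reduces to the Euclidean distance between two landmark points, converted from pixels to physical units via a calibration factor. The landmark names, coordinates and scale below are purely illustrative assumptions, not values from the authors' system:

```python
import numpy as np

# Hypothetical landmark coordinates (in pixels) as a facial-landmarking
# model might return them; names and values are illustrative only.
landmarks = {
    "left_eye_outer":  np.array([120.0, 210.0]),
    "right_eye_outer": np.array([260.0, 212.0]),
    "nasion":          np.array([190.0, 205.0]),
    "menton":          np.array([192.0, 420.0]),
}

def distance_px(a, b):
    """Euclidean distance between two named landmarks, in pixels."""
    return float(np.linalg.norm(landmarks[a] - landmarks[b]))

# Convert pixels to millimetres using a known reference scale, e.g. a
# marker of known physical size visible in the image (illustrative value).
MM_PER_PIXEL = 0.35

print(distance_px("left_eye_outer", "right_eye_outer") * MM_PER_PIXEL)
print(distance_px("nasion", "menton") * MM_PER_PIXEL)
```

      In practice the hard part is the landmarking model itself; the measurement step stays this simple once reliable landmarks are available.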

    • Dr. Rahee Walambe / Vishal Gokhale - Processing Sequential Data using RNNs

      480 Mins

      Data that forms the basis of many of our daily activities, like speech, text and video, has sequential/temporal dependencies. Traditional deep learning models are inadequate at modeling this connectivity; they needed to be made recurrent before they could bring technologies such as voice assistants (Alexa, Siri) or video-based speech translation (Google Translate) to a practically usable form by reducing the Word Error Rate (WER) significantly. RNNs solve this problem by adding internal memory. This addition bolsters the capacity of traditional neural networks, and the results outperform conventional ML techniques wherever temporal dynamics matter most.
      In this full-day immersive workshop, participants will develop an intuition for sequence models through hands-on learning, along with the mathematical premise of RNNs.
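
      The "internal memory" can be seen in a minimal vanilla (Elman) RNN cell: the hidden state computed at one time step is fed back in at the next. A sketch with untrained random weights, for intuition only:

```python
import numpy as np

rng = np.random.default_rng(1)

# A vanilla (Elman) RNN cell: the hidden state h is the internal memory
# carried from one time step to the next.
input_size, hidden_size = 3, 5
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
b_h = np.zeros(hidden_size)

def rnn_forward(xs):
    """Run a sequence of input vectors through the cell; return all hidden states."""
    h = np.zeros(hidden_size)
    hs = []
    for x in xs:  # one step per element of the sequence
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
        hs.append(h)
    return hs

sequence = [rng.normal(size=input_size) for _ in range(4)]
states = rnn_forward(sequence)
print(len(states), states[-1].shape)  # 4 hidden states of size 5
```

      Everything sequence-specific lives in the `W_hh @ h` term: remove it and the model collapses back to a feed-forward network applied independently at each step.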

    • Favio Vázquez - Complete Data Science Workflows with Open Source Tools

      90 Mins

      Cleaning, preparing, transforming, exploring and modeling data is what we hear about all the time in data science, and these steps may be the most important ones. But that's not all there is to data science. In this talk you will learn how the combination of Apache Spark, Optimus, the Python ecosystem and data operations can form a complete framework for data science that will allow you and your company to go further, beyond common sense and intuition, to solve complex business problems.

    • 45 Mins

      Recent advancements in AI are proving beneficial in the development of applications in various spheres of the healthcare sector, such as microbiological analysis, drug discovery, disease diagnosis, genomics, medical imaging and bioinformatics, for translating large-scale data into improved human healthcare. Automation in healthcare using machine learning and deep learning assists physicians in making faster, cheaper and more accurate diagnoses.

      Due to the increasing availability of electronic healthcare data (structured as well as unstructured) and the rapid progress of analytics techniques, a lot of research is being carried out in this area. Popular AI techniques include machine learning and deep learning for structured data, and natural language processing for unstructured data. Guided by relevant clinical questions, powerful deep learning techniques can unlock clinically relevant information hidden in massive amounts of data, which in turn can assist clinical decision making.

      We have successfully developed three deep learning based healthcare applications using TensorFlow and are currently working on three more healthcare related projects. In this demonstration session, we shall first briefly discuss the significance of deep learning for healthcare solutions. Next, we will demonstrate two deep learning based healthcare applications developed by us. The discussion of each application will include the precise problem statement, the proposed solution, the data collected & used, experimental analysis, and the challenges encountered & overcome to achieve this success. Finally, we will briefly discuss the other applications we are currently working on and the future scope of research in this area.

    • Maryam Jahanshahi - Applying Dynamic Embeddings in Natural Language Processing to Analyze Text over Time

      45 Mins
      Case Study

      Many data scientists are familiar with word embedding models such as word2vec, which capture semantic similarity of words in a large corpus. However, word embeddings are limited in their ability to interrogate a corpus alongside other context or over time. Moreover, word embedding models either need significant amounts of data, or need tuning through transfer learning to a domain-specific vocabulary of the kind unique to most commercial applications.

      In this talk, I will introduce exponential family embeddings. Developed by Rudolph and Blei, these methods extend the idea of word embeddings to other types of high-dimensional data. I will demonstrate how they can be used to conduct advanced topic modeling on medium-sized datasets, which are specialized enough to require significant modifications of a word2vec model and contain more general data types (including categorical, count and continuous). I will discuss how my team implemented a dynamic embedding model using TensorFlow and our proprietary corpus of job descriptions. Using both categorical and natural language data associated with jobs, we charted the development of different skill sets over the last 3 years. I will specifically focus the description of results on how tech and data science skill sets have developed, grown and cross-pollinated other types of jobs over time.
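
      The kind of question a dynamic embedding answers can be illustrated with a toy computation: given embeddings of the same vocabulary trained on two time slices of a corpus (assumed here to already live in a common, aligned space, which real dynamic-embedding models handle explicitly), measure how far each word's vector has drifted. The 4-dimensional vectors below are entirely made up:

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical embeddings of the same vocabulary on two time slices.
emb_2016 = {
    "python": np.array([0.9, 0.1, 0.0, 0.2]),
    "snake":  np.array([0.8, 0.2, 0.1, 0.1]),
    "pandas": np.array([0.7, 0.3, 0.0, 0.2]),
}
emb_2019 = {
    "python": np.array([0.2, 0.9, 0.3, 0.1]),
    "snake":  np.array([0.8, 0.1, 0.1, 0.2]),
    "pandas": np.array([0.1, 0.8, 0.4, 0.1]),
}

# How far has each word's meaning moved between the two slices?
for word in emb_2016:
    drift = 1.0 - cosine(emb_2016[word], emb_2019[word])
    print(word, round(drift, 3))

# In this toy data "python" and "pandas" drift toward each other
# (a shared technical sense), while "snake" stays put.
print(cosine(emb_2019["python"], emb_2019["pandas"]) >
      cosine(emb_2016["python"], emb_2016["pandas"]))
```

      Tracking exactly this kind of drift and convergence over job-description text is what charting skill sets over time amounts to.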

    • Saurabh Jha / Rohan Shravan / Usha Rengaraju - Hands on Deep Learning for Computer Vision

      480 Mins

      Computer vision has many applications, including medical imaging, autonomous vehicles, industrial inspection and augmented reality. The use of deep learning for computer vision can be split into multiple categories, for both images and videos: classification, detection, segmentation & generation. Having worked in deep learning with a focus on computer vision, we have come across various challenges and learned best practices over a period of experimenting with cutting-edge ideas. This workshop is for data scientists & computer vision engineers whose focus is deep learning. We will cover state-of-the-art architectures for image classification and segmentation, and practical tips & tricks to train deep neural network models. It will be a hands-on session where every concept is introduced through Python code, and our deep learning frameworks of choice will be PyTorch v1.0 and Keras.

      Given we have only 8 hours, we will cover the most important fundamentals and current techniques, and avoid anything which is obsolete or not used by state-of-the-art algorithms. We will start directly with building the intuition for Convolutional Neural Networks, and focus on core architectural problems. We will try to answer some of the hard questions, like how many layers a network must have and how many kernels we should add. We will look at the architectural journey of some of the best papers and discover what each brought into the field of Vision AI, making today's best networks possible. We will cover 9 different kinds of convolutions, spanning a spectrum of problems like running DNNs on constrained hardware, super-resolution, image segmentation, etc. The concepts will be good enough for all of us to move to harder problems like segmentation or super-resolution later, but we will focus on object recognition, followed by object detection. We will build our networks step by step, learning how optimization techniques actually improve our networks and exactly when we should introduce them. We hope to leave you with the confidence to read research papers as second nature. Given we have 8 hours, and we want the sessions to be productive, instead of introducing all the problems and solutions we will focus on the fundamentals of modern deep neural networks.
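
      The core building block discussed above, the convolution, fits in a few lines of NumPy. This sketch is a plain "valid" convolution with a single hand-picked edge-detecting kernel, not one of the 9 specialised variants the workshop covers:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (really cross-correlation, as in most DL frameworks)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector: responds where intensity changes left-to-right.
kernel = np.array([[1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0]])

# 6x6 image: bright left half, dark right half.
image = np.zeros((6, 6))
image[:, :3] = 1.0

response = conv2d(image, kernel)
print(response)  # strong response along the vertical edge, zero elsewhere
```

      In a CNN the kernel values are not hand-picked but learned, and "how many kernels should we add" is the question of how many such detectors each layer gets.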

    • Tanay Pant - Machine data: how to handle it better?

      Tanay Pant
      Developer Advocate
      45 Mins

      The rise of IoT and smart infrastructure has led to the generation of massive amounts of complex data. Traditional solutions struggle to cope with this shift, leading to a decrease in performance and an increase in cost. In this session, I will talk about time-series data and machine data, the challenges of working with this kind of data, ingesting it using data from NYC cabs, and running real-time queries to visualise it and gather insights. By the end of this session, you will be able to set up a highly scalable data pipeline for complex time-series data with real-time query performance.
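
      A core operation in such a pipeline is rolling raw readings up into fixed time buckets, so dashboards and queries never have to scan every raw point. A minimal sketch with made-up sensor readings; a real deployment would run this rollup inside the time-series store rather than in application code:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical raw machine readings: (seconds_since_start, value) pairs,
# e.g. meter readings arriving every few seconds from a device.
readings = [
    (0, 10.0),
    (17, 12.0),
    (42, 11.0),
    (65, 20.0),
    (99, 22.0),
    (121, 30.0),
]

def downsample(readings, bucket_seconds):
    """Aggregate raw points into fixed time buckets (average per bucket)."""
    buckets = defaultdict(list)
    for ts, value in readings:
        # Map each timestamp to the start of its bucket.
        buckets[ts - ts % bucket_seconds].append(value)
    return {start: mean(values) for start, values in sorted(buckets.items())}

print(downsample(readings, 60))  # one averaged value per minute
```

      The same bucketing idea underlies the real-time aggregation queries used to visualise a stream like the NYC cab data.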