Adversarial Attacks on Neural Networks

Since 2014, adversarial examples in Deep Neural Networks have come a long way. This talk aims to be a comprehensive introduction to adversarial attacks, covering the main threat models (black box and white box) and approaches to crafting adversarial examples, and will include demos. The talk will dive deep into the intuition behind why adversarial examples exhibit the properties they do: in particular, their transferability across models and training data, and the high confidence with which models assign incorrect labels. Finally, we will go over various approaches to mitigating these attacks (Adversarial Training, Defensive Distillation, Gradient Masking, etc.) and discuss what seems to have worked best over the past year.
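As a concrete illustration (not necessarily the demo code from the talk), below is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the earliest and best-known white-box attacks (Goodfellow et al., 2014): the input is nudged by a small step epsilon in the direction of the sign of the loss gradient. The model, data shapes, and epsilon value here are illustrative placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return a copy of x perturbed by epsilon in the sign of the loss gradient (FGSM)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)   # loss w.r.t. the true labels
    loss.backward()                       # gradients flow back to the input pixels
    x_adv = x + epsilon * x.grad.sign()   # one signed step that increases the loss
    return x_adv.clamp(0, 1).detach()     # keep pixels in a valid [0, 1] range

# Toy usage with an untrained classifier on random "images".
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(4, 1, 28, 28)              # batch of four fake 28x28 images
y = torch.randint(0, 10, (4,))            # fake labels
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max().item())     # perturbation is bounded by epsilon
```

In the same hedged spirit, here is a sketch of adversarial training, one of the defenses mentioned above: each minibatch is augmented with FGSM-perturbed copies of itself, so the model learns to classify both clean and attacked inputs. The 50/50 weighting of the two losses is an illustrative choice, not a recommendation from the talk.

```python
def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One training step on a mix of clean and FGSM-perturbed inputs."""
    model.train()
    x_adv = fgsm_attack(model, x, y, epsilon)   # craft attacks against the current weights
    optimizer.zero_grad()                       # discard gradients left over from the attack
    loss = 0.5 * (F.cross_entropy(model(x), y) +
                  F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
print(adversarial_training_step(model, optimizer, x, y))
```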

 

Outline/Structure of the Talk

The presentation will follow this outline:

  • What are Adversarial attacks?
  • CIA Model of Security
  • Threat models
  • Examples and demos of Adversarial attacks
  • Proposed Defenses against adversarial attacks
  • Intuition behind Adversarial attacks
  • What’s next?

Learning Outcome

This talk is motivated by the question: are adversarial examples simply a fun toy problem for researchers, or a symptom of a deeper and more chronic frailty in our models? Attendees should come away understanding that Deep Learning models are just another tool, susceptible to adversarial attacks, and that such attacks can have huge implications, especially in a world with self-driving cars and other automation.

Target Audience

Deep Learning practitioners and students interested in learning more about this up-and-coming area of research.

Prerequisites for Attendees

A beginner-level understanding of how Deep Neural Networks work.
