CS 598: Special Topics on Adversarial Machine Learning

Bo Li

Goal of the class

The goal of this class is to introduce participants to adversarial machine learning, including research areas at the intersection of security, privacy, and machine learning. The field requires some mathematical maturity to understand and appreciate the contributions of the papers. The course will provide basic background material as well as in-depth discussion of state-of-the-art research.

Course Schedule (Tentative)

8/28  Course Overview

  • For the first class, I will spend some time explaining background ideas in adversarial machine learning broadly, including the fundamental causes of the problem and the current state of research.

9/13  Guest Lecture

           Yevgeniy Vorobeychik (Washington University in St. Louis)

10/4  Poisoning Attacks Against Machine Learning Models

10/9  Guest Lecture

           Gerald Friedland (UC Berkeley)

10/11  Guest Lecture

           Jim Kapinski (Toyota)

11/1  Guest Lecture

           Tianyin Xu (UIUC)

  • Robustness testing for Deep Neural Networks


Paper Presentation Guidelines:

* For the presenter:

For each paper you present, please provide an overview and lead an in-depth discussion covering: What problem does the paper try to address, and how? How does it fit into the broader context (e.g., related work)? What are the strengths and weaknesses of the paper/approach? What new research questions does it raise?

* For the audience:

Please read the paper before each class and add your questions to the shared Google Doc so that the presenter can try to answer them; we can also discuss the questions in class.

Final Report
