ECE 663: Machine Learning in Adversarial Settings (Spring 2023)


Instructor

Neil Gong, neil.gong@duke.edu

Teaching Assistants

Hongbin Liu, hongbin.liu@duke.edu
Qitong Gao, qitong.gao@duke.edu

Lectures

Time: MW 3:30pm - 4:45pm
Location: Teer 106

Office Hours

Time: Thursday 1:00pm - 2:00pm
Location: 413 Wilkinson Building

Tentative Schedule

01/11    Course overview (Slides)

01/16    Holiday

01/18    Adversarial examples (white-box) (Slides)

01/23    Adversarial examples (black-box) (Slides)

01/25    Empirical defenses against adversarial examples (Slides)

01/30    Certified defenses against adversarial examples (Slides)

02/01    Data poisoning attacks on classifiers (Slides)

02/06    Data poisoning attacks on recommender systems (Slides)

02/08    Data poisoning attacks on graph-based methods (Slides)

02/13    Model poisoning attacks on federated learning (Slides)

02/15    Certified defenses against data poisoning attacks (Slides)

02/20    Backdoor attacks on classifiers (Slides)

02/22    Backdoor attacks on pre-trained foundation models (Slides)

02/27    Empirical defenses against backdoor attacks (Slides)

03/01    Debugging data poisoning and backdoor attacks

03/06    Model stealing attacks (Slides)

03/08    Defenses against model stealing attacks

03/13    Spring recess

03/15    Spring recess

03/20    Intellectual property protection (Slides)

03/22    Privacy attacks - model inversion and membership inference (Slides)

03/27    Privacy attacks on federated learning (Slides)

03/29    Privacy-preserving machine learning via statistical methods (Slides)

04/03    Privacy-preserving machine learning via cryptographic methods (Slides)

04/05    Data tracing in machine learning (Slides)

04/10    Misuse of machine learning (Slides)

04/12    Deepfakes

04/17    Project presentations

04/19    Project presentations

Course Description

Machine learning is being widely deployed across many aspects of society. We anticipate that machine learning systems will become a new attack surface: attackers will exploit vulnerabilities in machine learning algorithms and systems to subvert their security and privacy. In this course, we will discuss security and privacy attacks on machine learning systems and state-of-the-art defenses against them.

Class Format

The class is oriented around paper reading, lectures, discussion, and a project. Each lecture focuses on one topic. Students are expected to read the suggested papers on that topic and send comments to a specified email address by the end of the day before the lecture. Students are also expected to lead a lecture on a chosen topic, complete a class project, present the project, and write a project report.

Groups

Students may form groups of up to four for the lecture and the class project.

Deadlines

Reading assignments

Choosing a topic for lecture

Class project

Grading Policy

50% project
25% reading assignments
15% class presentation
10% class participation