ECE 663: Machine Learning in Adversarial Settings (Spring 2023)
Instructor
Neil Gong, neil.gong@duke.edu
Teaching Assistants
Hongbin Liu, hongbin.liu@duke.edu
Qitong Gao, qitong.gao@duke.edu
Lectures
Time: MW 3:30pm - 4:45pm
Location: Teer 106
Office Hours
Time: Thursday 1:00pm - 2:00pm
Location: 413 Wilkinson Building
Tentative Schedule
01/11 Course overview (Slides)
01/16 Holiday
01/18 Adversarial examples (white-box) (Slides)
01/23 Adversarial examples (black-box) (Slides)
- HopSkipJumpAttack: A Query-Efficient Decision-Based Attack
- Optional: Delving into Transferable Adversarial Examples and Black-box Attacks
- Towards Deep Learning Models Resistant to Adversarial Attacks
- Optional: Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
- Optional: Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods
- Poisoning Attacks against Support Vector Machines
- Optional: Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
- Attacking Graph-based Classification via Manipulating the Graph Structure
- Optional: Adversarial Attacks on Neural Networks for Graph Data
- Local Model Poisoning Attacks to Byzantine-Robust Federated Learning
- Speakers: Yixin Zhang and Pingcheng Jian
- Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks
- Optional: Certified Robustness of Nearest Neighbors against Data Poisoning Attacks
- Optional: Certified Defenses for Data Poisoning Attacks
- BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain
- Optional: Trojaning Attack on Neural Networks
- Speakers: Haocheng Meng, Fakrul Islam Tushar, Yoo Bin Shin, and Yanting Wang
- BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning
- Optional: Poisoning and Backdooring Contrastive Learning
- Speakers: Yi Gao, Bofeng Chen, Zhicheng Guo, and Danyu Sun
- Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks
- Optional: STRIP: A Defence Against Trojan Attacks on Deep Neural Networks
- Speakers: Haoming Yang, Oded Schlesinger, Rucha Patil, and Angikar Ghosal
- Identifying a Training-Set Attack's Target Using Renormalized Influence Estimation
- Optional: Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks
- Stealing Machine Learning Models via Prediction APIs
- Optional: Stealing Hyperparameters in Machine Learning
- Speakers: Minxue Tang, Yitu Wang, and Xueying Wu
- Prediction Poisoning: Utility-Constrained Defenses Against Model Stealing Attacks
- Optional: PRADA: Protecting Against DNN Model Stealing Attacks
03/15 Spring recess
03/20 Intellectual property protection (Slides)
- Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring
- Optional: Certified Neural Network Watermarks with Randomized Smoothing
- Speakers: Minke Yu, Ziling Yuan, and Zhihao Dou
- Membership Inference Attacks against Machine Learning Models
- Optional: Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures
- Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning
- Optional: Deep Leakage from Gradients
- Speakers: Pragya Sharma, Tianyi Hu, and William Chown
- Generative Adversarial Nets
- Stable Diffusion
- ChatGPT
- Speakers: Grant Costa, Suhang Jiang, Martin Kuo, and Roger Chang
- Group 1: Fakrul Islam Tushar, Haocheng Meng, Yanting Wang, Yoo Bin Shin
- Group 2: Yixin Zhang, Pingcheng Jian
- Group 3: Roger Chang, Martin Kuo, Grant Costa, Suhang Jiang
- Group 4: Bofeng Chen, Yi Gao, Danyu Sun, Zhicheng Guo
- Group 5: Minke Yu, Ziling Yuan, Zhihao Dou
- Group 6: Minxue Tang, Yitu Wang, Xueying Wu
- Group 7: Haoming Yang, Oded Schlesinger, Rucha Patil, Angikar Ghosal
- Group 8: Tianyi Hu
- Group 9: William Chown, Pragya Sharma
Course Description
Machine learning is being widely deployed in many aspects of our society. We envision that machine learning systems will become a new attack surface: attackers will exploit vulnerabilities in machine learning algorithms and systems to subvert their security and privacy. In this course, we will discuss security and privacy attacks on machine learning systems and state-of-the-art defenses against them.
Class Format
The class is oriented around paper reading, lectures, discussion, and a class project. Each lecture focuses on one topic. Students are expected to read the suggested papers on the topic and send comments to a specified email address by the end of the day before the lecture. Students are also expected to lead a lecture on a chosen topic, complete a class project, present the project, and write a project report.
Group
Students can form groups of at most 4 students for the lecture and the class project.
Deadlines
- Reading assignments: Sunday and Tuesday by 11:59pm. Send comments to adversarialmlduke@gmail.com. Please send your comments on all papers in a single email thread.
- A group sends three preferred dates to adversarialmlduke@gmail.com by 11:59pm, 01/25.
- 02/01: project proposal due.
- 03/15: milestone report due.
- 04/17, 04/19: project presentation.
- 04/30: final project report due.
Grading Policy
50% project
25% reading assignment
10% class participation
15% class presentation