Trustworthy Machine Learning
Machine learning (ML) and artificial intelligence are being widely deployed across society. Traditional ML research focuses mainly on developing new methods to optimize accuracy and efficiency, while the security and privacy of ML are largely ignored, even though they are key for safety- and security-critical application domains such as self-driving cars, health care, and cybersecurity. We aim to build provably secure and privacy-preserving ML. In ML systems, both users and model providers desire confidentiality/privacy: users desire privacy of their confidential training and testing data, while model providers desire confidentiality of their proprietary models, learning algorithms, and training data, which represent intellectual property. We are interested in protecting confidentiality/privacy for both parties. For security, an attacker's goal is to manipulate an ML system so that it makes predictions the attacker desires; the attacker can manipulate the training phase and/or the testing phase to achieve this goal. We aim to build ML systems that are provably secure against such attacks.
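As a concrete illustration of a testing-phase attack, the sketch below crafts an adversarial example with the standard fast gradient sign method. This is a generic textbook example, not a method from the publications listed below, and the model and data are toy placeholders.

```python
# FGSM sketch: perturb a test input so that a classifier may misclassify it.
# The linear model and random input are placeholders; epsilon is the L_inf budget.
import torch
import torch.nn.functional as F

model = torch.nn.Linear(20, 3)               # stand-in for a trained classifier
x = torch.randn(1, 20, requires_grad=True)   # a test example
y = torch.tensor([0])                        # its true label
epsilon = 0.1

loss = F.cross_entropy(model(x), y)
loss.backward()

# Move the input in the direction that increases the loss.
x_adv = (x + epsilon * x.grad.sign()).detach()
print(model(x_adv).argmax(dim=1))            # possibly no longer the true label
```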
Publications
Intellectual property of machine learning
- Minxue Tang, Anna Dai, Louis DiValentin, Aolin Ding, Amin Hass, Neil Zhenqiang Gong, and Yiran Chen. "ModelGuard: Information-Theoretic Defense Against Model Extraction Attacks". In USENIX Security Symposium, 2024.
- Yupei Liu, Jinyuan Jia, Hongbin Liu, and Neil Zhenqiang Gong. "StolenEncoder: Stealing Pre-trained Encoders in Self-supervised Learning". In ACM Conference on Computer and Communications Security (CCS), 2022.
- Xinlei He, Jinyuan Jia, Michael Backes, Neil Zhenqiang Gong, and Yang Zhang. "Stealing Links from Graph Neural Networks". In USENIX Security Symposium, 2021.
- Xiaoyu Cao, Jinyuan Jia, and Neil Zhenqiang Gong. "IPGuard: Protecting Intellectual Property of Deep Neural Networks via Fingerprinting the Classification Boundary". In ACM ASIA Conference on Computer and Communications Security (ASIACCS), 2021. [Code]
- Jinyuan Jia, Binghui Wang, and Neil Zhenqiang Gong. "Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes". In ACM ASIA Conference on Computer and Communications Security (ASIACCS), 2021.
- Binghui Wang and Neil Zhenqiang Gong. "Stealing Hyperparameters in Machine Learning". In IEEE Symposium on Security and Privacy (IEEE S&P), 2018.
This paper demonstrates that an adversary can steal the hyperparameters used to train a machine learning model by strategically querying the model.
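The core idea can be illustrated on ridge regression: once the learned parameters are known (e.g., recovered through such queries), the regularization hyperparameter must make the gradient of the training objective vanish at those parameters, so it can be recovered in closed form. The sketch below is a minimal illustration of this idea under that assumption, not the paper's full attack; the data are synthetic placeholders.

```python
# Recover the ridge-regression hyperparameter from the training data and the
# learned weights via the first-order optimality condition:
#   X^T (X w - y) + lambda * w = 0   =>   lambda = -w^T X^T (X w - y) / (w^T w)
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=200)

true_lambda = 2.5
model = Ridge(alpha=true_lambda, fit_intercept=False).fit(X, y)
w = model.coef_

residual_grad = X.T @ (X @ w - y)
stolen_lambda = -(w @ residual_grad) / (w @ w)
print(true_lambda, stolen_lambda)   # the estimate closely matches true_lambda
```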
Membership inference attacks and defenses
- Hongbin Liu*, Jinyuan Jia*, Wenjie Qu, and Neil Zhenqiang Gong. "EncoderMI: Membership Inference against Pre-trained Encoders in Contrastive Learning". In ACM Conference on Computer and Communications Security (CCS), 2021. *Equal contribution
- Xinlei He, Jinyuan Jia, Michael Backes, Neil Zhenqiang Gong, and Yang Zhang. "Stealing Links from Graph Neural Networks". In USENIX Security Symposium, 2021.
- Hongbin Liu, Jinyuan Jia, and Neil Zhenqiang Gong. "On the Intrinsic Differential Privacy of Bagging". In International Joint Conference on Artificial Intelligence (IJCAI), 2021.
- Bo Hui, Yuchen Yang, Haolin Yuan, Philippe Burlina, Neil Zhenqiang Gong, and Yinzhi Cao. "Practical Blind Membership Inference Attack via Differential Comparisons". In ISOC Network and Distributed System Security Symposium (NDSS), 2021.
- Jinyuan Jia, Ahmed Salem, Michael Backes, Yang Zhang, and Neil Zhenqiang Gong. "MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples". In ACM Conference on Computer and Communications Security (CCS), 2019.
Code and data are available [here].
Attribute inference attacks and defenses
- Jinyuan Jia, Binghui Wang, and Neil Zhenqiang Gong. "Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes". In ACM ASIA Conference on Computer and Communications Security (ASIACCS), 2021.
- Jinyuan Jia and Neil Zhenqiang Gong. "AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning". In USENIX Security Symposium, 2018.
Code and data are available [here].
- Neil Zhenqiang Gong and Bin Liu. "Attribute Inference Attacks in Online Social Networks". ACM Transactions on Privacy and Security (TOPS), 21(1), 2018.
- Jinyuan Jia, Binghui Wang, Le Zhang, and Neil Zhenqiang Gong. "AttriInfer: Inferring User Attributes in Online Social Networks Using Markov Random Fields". In International World Wide Web Conference (WWW), 2017.
- Neil Zhenqiang Gong and Bin Liu. "You are Who You Know and How You Behave: Attribute Inference Attacks via Users' Social Friends and Behaviors". In USENIX Security Symposium, 2016.
- Neil Zhenqiang Gong, Ameet Talwalkar, Lester Mackey, Ling Huang, Richard Shin, Emil Stefanov, Elaine Shi, and Dawn Song. "Joint Link Prediction and Attribute Inference using a Social-Attribute Network". ACM Transactions on Intelligent Systems and Technology (TIST), 5(2), 2014.
Security at training phase: poisoning attacks, backdoor attacks, and defenses
Provably secure defenses
- Jinyuan Jia, Yupei Liu, Xiaoyu Cao, and Neil Zhenqiang Gong. "Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks". In AAAI Conference on Artificial Intelligence (AAAI), 2022.
- Jinyuan Jia, Xiaoyu Cao, and Neil Zhenqiang Gong. "Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks". In AAAI Conference on Artificial Intelligence (AAAI), 2021.
Code and data are available [here].
- Binghui Wang, Xiaoyu Cao, Jinyuan Jia, and Neil Zhenqiang Gong. "On Certifying Robustness against Backdoor Attacks via Randomized Smoothing". In CVPR 2020 Workshop on Adversarial Machine Learning in Computer Vision, 2020.
Poisoning and backdoor attacks to self-supervised learning and defenses
- Hongbin Liu, Michael K Reiter, and Neil Zhenqiang Gong. "Mudjacking: Patching Backdoor Vulnerabilities in Foundation Models". In USENIX Security Symposium, 2024.
- Hongbin Liu, Jinyuan Jia, and Neil Zhenqiang Gong. "PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning". In USENIX Security Symposium, 2022.
- Jinyuan Jia, Yupei Liu, and Neil Zhenqiang Gong. "BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning". In IEEE Symposium on Security and Privacy, 2022.
Poisoning attacks to federated analytics and defenses
- Secure federated learning [code and data] [slides] [talk on YouTube]
- Xiaoyu Cao, Jinyuan Jia, Zaixi Zhang, and Neil Zhenqiang Gong. "FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information". IEEE Symposium on Security and Privacy, 2023.
- Xiaoyu Cao, Zaixi Zhang, Jinyuan Jia, and Neil Zhenqiang Gong. "FLCert: Provably Secure Federated Learning against Poisoning Attacks". IEEE Transactions on Information Forensics and Security (TIFS), accepted, 2022.
- Zaixi Zhang, Xiaoyu Cao, Jinyuan Jia, and Neil Zhenqiang Gong. "FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients". In ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2022.
- Xiaoyu Cao and Neil Zhenqiang Gong. "MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients". In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2022.
- Yongji Wu, Xiaoyu Cao, Jinyuan Jia, and Neil Zhenqiang Gong. "Poisoning Attacks to Local Differential Privacy Protocols for Key-Value Data". In USENIX Security Symposium, 2022.
- Xiaoyu Cao, Jinyuan Jia, and Neil Zhenqiang Gong. "Data Poisoning Attacks to Local Differential Privacy Protocols". In USENIX Security Symposium, 2021.
- Minghong Fang, Minghao Sun, Qi Li, Neil Zhenqiang Gong, Jin Tian, and Jia Liu. "Data Poisoning Attacks and Defenses to Crowdsourcing Systems". In The Web Conference (WWW), 2021.
- Xiaoyu Cao, Jinyuan Jia, and Neil Zhenqiang Gong. "Provably Secure Federated Learning against Malicious Clients". In AAAI Conference on Artificial Intelligence (AAAI), 2021.
- Xiaoyu Cao, Minghong Fang, Jia Liu, and Neil Zhenqiang Gong. "FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping". In ISOC Network and Distributed System Security Symposium (NDSS), 2021. [Code]
- Minghong Fang*, Xiaoyu Cao*, Jinyuan Jia, and Neil Zhenqiang Gong. "Local Model Poisoning Attacks to Byzantine-Robust Federated Learning". In USENIX Security Symposium, 2020. [Code] *Equal contribution
This paper demonstrates that malicious clients can substantially reduce the testing accuracy of the learned model via sending strategically poisoned model updates (or local models) to the server.
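The sketch below illustrates the setting, not the paper's optimized attack: in one federated-averaging round, a few malicious clients submit crafted updates (here simply sign-flipped and amplified) and drag the aggregated global update away from the benign average. The model, data, and attack rule are toy placeholders.

```python
# Toy federated-averaging round with a few malicious clients.
# Honest clients send local updates; malicious clients send sign-flipped,
# amplified updates (a simplified stand-in for an optimized poisoning attack).
import numpy as np

rng = np.random.default_rng(0)
dim, n_clients, n_malicious = 5, 10, 3
global_model = np.zeros(dim)

def local_update(model):
    # Placeholder for local training: every honest client's update points
    # roughly toward the same optimum, plus noise.
    optimum = np.ones(dim)
    return (optimum - model) + 0.1 * rng.normal(size=dim)

updates = [local_update(global_model) for _ in range(n_clients)]
benign_avg = np.mean(updates, axis=0)

# Malicious clients flip and amplify their updates.
for i in range(n_malicious):
    updates[i] = -10.0 * updates[i]

poisoned_avg = np.mean(updates, axis=0)
print("benign aggregate:  ", np.round(benign_avg, 2))
print("poisoned aggregate:", np.round(poisoned_avg, 2))
```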
Poisoning attacks to graph-based methods and defenses
- Binghui Wang, Jinyuan Jia, Xiaoyu Cao, and Neil Zhenqiang Gong. "Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation". In ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2021. [Code]
- Zaixi Zhang*, Jinyuan Jia*, Binghui Wang, and Neil Zhenqiang Gong. "Backdoor Attacks to Graph Neural Networks". In ACM Symposium on Access Control Models and Technologies (SACMAT), 2021. *Equal contribution
- Jinyuan Jia*, Binghui Wang*, Xiaoyu Cao, and Neil Zhenqiang Gong. "Certified Robustness of Community Detection against Adversarial Structural Perturbation via Randomized Smoothing". In The Web Conference (WWW), 2020. *Equal contribution
- Binghui Wang and Neil Zhenqiang Gong. "Attacking Graph-based Classification via Manipulating the Graph Structure". In ACM Conference on Computer and Communications Security (CCS), 2019.
Poisoning attacks to recommender systems and defenses
- Jinyuan Jia, Yupei Liu, Yuepeng Hu, and Neil Zhenqiang Gong. "PORE: Provably Robust Recommender Systems against Data Poisoning Attacks". In USENIX Security Symposium, 2023.
- Hai Huang, Jiaming Mu, Neil Zhenqiang Gong, Qi Li, Bin Liu, and Mingwei Xu. "Data Poisoning Attacks to Deep Learning Based Recommender Systems". In ISOC Network and Distributed System Security Symposium (NDSS), 2021.
- Minghong Fang, Neil Zhenqiang Gong, and Jia Liu. "Influence Function based Data Poisoning Attacks to Top-N Recommender Systems". In The Web Conference (WWW), 2020.
- Minghong Fang, Guolei Yang, Neil Zhenqiang Gong, and Jia Liu. "Poisoning Attacks to Graph-Based Recommender Systems". In Annual Computer Security Applications Conference (ACSAC), 2018.
- Guolei Yang, Neil Zhenqiang Gong, and Ying Cai. "Fake Co-visitation Injection Attacks to Recommender Systems". In ISOC Network and Distributed System Security Symposium (NDSS), 2017.
Security at testing phase (i.e., adversarial examples): attacks, defenses, and applications for privacy protection
Provably secure defenses against adversarial examples
- Jinghuai Zhang, Jinyuan Jia, Hongbin Liu, and Neil Zhenqiang Gong. "PointCert: Point Cloud Classification with Deterministic Certified Robustness Guarantees". In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023.
- Wenjie Qu, Jinyuan Jia, and Neil Zhenqiang Gong. "REaaS: Enabling Adversarially Robust Downstream Classifiers via Robust Encoder as a Service". In ISOC Network and Distributed System Security Symposium (NDSS), 2023.
- Jinyuan Jia, Wenjie Qu, and Neil Zhenqiang Gong. "MultiGuard: Provably Robust Multi-label Classification against Adversarial Examples". In Conference on Neural Information Processing Systems (NeurIPS), 2022.
- Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Hongbin Liu, and Neil Zhenqiang Gong. "Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations". In International Conference on Learning Representations (ICLR), 2022.
- Hongbin Liu*, Jinyuan Jia*, and Neil Zhenqiang Gong. "PointGuard: Provably Robust 3D Point Cloud Classification". In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021. *Equal contribution
- Jinyuan Jia, Xiaoyu Cao, Binghui Wang, and Neil Zhenqiang Gong. "Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing". In International Conference on Learning Representations (ICLR), 2020.
- Xiaoyu Cao and Neil Zhenqiang Gong. "Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification". In Annual Computer Security Applications Conference (ACSAC), 2017.
Applications of adversarial examples for privacy protection
- Jinyuan Jia and Neil Zhenqiang Gong. "Defending against Machine Learning based Inference Attacks via Adversarial Examples: Opportunities and Challenges". Adaptive Autonomous Secure Cyber Systems. Springer, Cham, 2020.
- Jinyuan Jia, Ahmed Salem, Michael Backes, Yang Zhang, and Neil Zhenqiang Gong. "MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples". In ACM Conference on Computer and Communications Security (CCS), 2019.
Code and data are available [here].
- Jinyuan Jia and Neil Zhenqiang Gong. "AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning". In USENIX Security Symposium, 2018.
Code and data are available [here].
Secure and privacy-preserving self-supervised/contrastive learning
- Wenjie Qu, Jinyuan Jia, and Neil Zhenqiang Gong. "REaaS: Enabling Adversarially Robust Downstream Classifiers via Robust Encoder as a Service". In ISOC Network and Distributed System Security Symposium (NDSS), 2023.
- Yupei Liu, Jinyuan Jia, Hongbin Liu, and Neil Zhenqiang Gong. "StolenEncoder: Stealing Pre-trained Encoders in Self-supervised Learning". In ACM Conference on Computer and Communications Security (CCS), 2022.
- Hongbin Liu, Jinyuan Jia, and Neil Zhenqiang Gong. "PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning". In USENIX Security Symposium, 2022.
- Jinyuan Jia, Yupei Liu, and Neil Zhenqiang Gong. "BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning". In IEEE Symposium on Security and Privacy, 2022.
- Hongbin Liu*, Jinyuan Jia*, Wenjie Qu, and Neil Zhenqiang Gong. "EncoderMI: Membership Inference against Pre-trained Encoders in Contrastive Learning". In ACM Conference on Computer and Communications Security (CCS), 2021. *Equal contribution
Secure and privacy-preserving federated analytics
- Secure federated learning [code and data] [slides] [talk on YouTube]
- Yuchen Yang, Bo Hui, Haolin Yuan, Neil Gong, and Yinzhi Cao. "PrivateFL: Accurate, Differentially Private Federated Learning via Personalized Data Transformation". In USENIX Security Symposium, 2023.
- Xiaoyu Cao, Jinyuan Jia, Zaixi Zhang, and Neil Zhenqiang Gong. "FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information". IEEE Symposium on Security and Privacy, 2023.
- Xiaoyu Cao, Zaixi Zhang, Jinyuan Jia, and Neil Zhenqiang Gong. "FLCert: Provably Secure Federated Learning against Poisoning Attacks". IEEE Transactions on Information Forensics and Security (TIFS), accepted, 2022.
- Zaixi Zhang, Xiaoyu Cao, Jinyuan Jia, and Neil Zhenqiang Gong. "FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients". In ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2022.
- Xiaoyu Cao and Neil Zhenqiang Gong. "MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients". In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2022.
- Yongji Wu, Xiaoyu Cao, Jinyuan Jia, and Neil Zhenqiang Gong. "Poisoning Attacks to Local Differential Privacy Protocols for Key-Value Data". In USENIX Security Symposium, 2022.
- Xiaoyu Cao, Jinyuan Jia, and Neil Zhenqiang Gong. "Data Poisoning Attacks to Local Differential Privacy Protocols". In USENIX Security Symposium, 2021.
- Minghong Fang, Minghao Sun, Qi Li, Neil Zhenqiang Gong, Jin Tian, and Jia Liu. "Data Poisoning Attacks and Defenses to Crowdsourcing Systems". In The Web Conference (WWW), 2021.
- Xiaoyu Cao, Jinyuan Jia, and Neil Zhenqiang Gong. "Provably Secure Federated Learning against Malicious Clients". In AAAI Conference on Artificial Intelligence (AAAI), 2021.
- Xiaoyu Cao, Minghong Fang, Jia Liu, and Neil Zhenqiang Gong. "FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping". In ISOC Network and Distributed System Security Symposium (NDSS), 2021. [Code]
- Minghong Fang*, Xiaoyu Cao*, Jinyuan Jia, and Neil Zhenqiang Gong. "Local Model Poisoning Attacks to Byzantine-Robust Federated Learning". In USENIX Security Symposium, 2020. [Code] *Equal contribution
- Jinyuan Jia and Neil Zhenqiang Gong. "Calibrate: Frequency Estimation and Heavy Hitter Identification with Local Differential Privacy via Incorporating Prior Knowledge". In IEEE International Conference on Computer Communications (INFOCOM), 2019.
Secure and privacy-preserving graph-based methods
- Binghui Wang, Jinyuan Jia, Xiaoyu Cao, and Neil Zhenqiang Gong. "Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation". In ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2021.
- Xinlei He, Jinyuan Jia, Michael Backes, Neil Zhenqiang Gong, and Yang Zhang. "Stealing Links from Graph Neural Networks". In USENIX Security Symposium, 2021.
- Zaixi Zhang*, Jinyuan Jia*, Binghui Wang, and Neil Zhenqiang Gong. "Backdoor Attacks to Graph Neural Networks". In ACM Symposium on Access Control Models and Technologies (SACMAT), 2021. *Equal contribution
- Jinyuan Jia*, Binghui Wang*, Xiaoyu Cao, and Neil Zhenqiang Gong. "Certified Robustness of Community Detection against Adversarial Structural Perturbation via Randomized Smoothing". In The Web Conference (WWW), 2020. *Equal contribution
- Binghui Wang and Neil Zhenqiang Gong. "Attacking Graph-based Classification via Manipulating the Graph Structure". In ACM Conference on Computer and Communications Security (CCS), 2019.
Secure and privacy-preserving recommender systems
- Jinyuan Jia, Yupei Liu, Yuepeng Hu, and Neil Zhenqiang Gong. "PORE: Provably Robust Recommender Systems against Data Poisoning Attacks". In USENIX Security Symposium, 2023.
- Hai Huang, Jiaming Mu, Neil Zhenqiang Gong, Qi Li, Bin Liu, and Mingwei Xu. "Data Poisoning Attacks to Deep Learning Based Recommender Systems". In ISOC Network and Distributed System Security Symposium (NDSS), 2021.
- Minghong Fang, Neil Zhenqiang Gong, and Jia Liu. "Influence Function based Data Poisoning Attacks to Top-N Recommender Systems". In The Web Conference (WWW), 2020.
- Minghong Fang, Guolei Yang, Neil Zhenqiang Gong, and Jia Liu. "Poisoning Attacks to Graph-Based Recommender Systems". In Annual Computer Security Applications Conference (ACSAC), 2018.
- Guolei Yang, Neil Zhenqiang Gong, and Ying Cai. "Fake Co-visitation Injection Attacks to Recommender Systems". In ISOC Network and Distributed System Security Symposium (NDSS), 2017.
- Jinyuan Jia and Neil Zhenqiang Gong. "AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning". In USENIX Security Symposium, 2018.
Code and data are available [here].
- Neil Zhenqiang Gong and Bin Liu. "Attribute Inference Attacks in Online Social Networks". ACM Transactions on Privacy and Security (TOPS), 21(1), 2018.
- Jinyuan Jia, Binghui Wang, Le Zhang, and Neil Zhenqiang Gong. "AttriInfer: Inferring User Attributes in Online Social Networks Using Markov Random Fields". In International World Wide Web Conference (WWW), 2017.
- Neil Zhenqiang Gong and Bin Liu. "You are Who You Know and How You Behave: Attribute Inference Attacks via Users' Social Friends and Behaviors". In USENIX Security Symposium, 2016.
- Bin Liu, Deguang Kong, Lei Cen, Neil Zhenqiang Gong, Hongxia Jin, and Hui Xiong. "Personalized Mobile App Recommendation: Reconciling App Functionality and User Privacy Preference". In ACM International Conference on Web Search and Data Mining (WSDM), 2015.
Secure and privacy-preserving machine learning as a service
- Bo Hui, Yuchen Yang, Haolin Yuan, Philippe Burlina, Neil Zhenqiang Gong, and Yinzhi Cao. "Practical Blind Membership Inference Attack via Differential Comparisons". In ISOC Network and Distributed System Security Symposium (NDSS), 2021.
- Jinyuan Jia, Ahmed Salem, Michael Backes, Yang Zhang, and Neil Zhenqiang Gong. "MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples". In ACM Conference on Computer and Communications Security (CCS), 2019.
Code and data are available [here].
- Binghui Wang and Neil Zhenqiang Gong. "Stealing Hyperparameters in Machine Learning". In IEEE Symposium on Security and Privacy (IEEE S&P), 2018.