Secure Federated Learning
Federated learning is a distributed learning framework in which many clients jointly train a model without sending their raw training data to a central server. Due to its distributed nature, federated learning is fundamentally vulnerable to poisoning attacks, in which malicious clients manipulate the training process by sending carefully crafted model updates to the server, so that the learned global model makes incorrect predictions of the attacker's choosing. Moreover, a malicious client or an honest-but-curious server can reconstruct a client's local training data from the model updates or global models. We study poisoning and privacy attacks on federated learning, as well as defenses that provably prevent, detect, and recover from them.
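To make the threat model concrete, below is a minimal illustrative sketch of the server's aggregation step. It contrasts plain federated averaging, which a single malicious update can dominate, with coordinate-wise median, a classic Byzantine-robust aggregator. This is a toy example under assumed names (`fedavg`, `coordinate_wise_median`) and synthetic updates, not the method of any specific paper listed below.

```python
import numpy as np

def fedavg(updates):
    """Plain federated averaging: the server averages client model updates.
    A single malicious client can shift this average arbitrarily far."""
    return np.mean(updates, axis=0)

def coordinate_wise_median(updates):
    """A classic Byzantine-robust aggregator: take the median of each model
    coordinate across clients, bounding the influence of a malicious minority."""
    return np.median(updates, axis=0)

# Toy round: 4 honest clients send similar updates; 1 malicious client
# sends a crafted update to drag the global model away (illustrative values).
rng = np.random.default_rng(0)
honest = [rng.normal(loc=1.0, scale=0.1, size=5) for _ in range(4)]
malicious = [np.full(5, -100.0)]
updates = np.stack(honest + malicious)

print("FedAvg:", fedavg(updates))                  # dominated by the attacker
print("Median:", coordinate_wise_median(updates))  # stays close to honest updates
```

Robust aggregators like the median only bound, rather than eliminate, an attacker's influence; the publications below study stronger attacks against such defenses and defenses with provable guarantees.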
Publications
Poisoning attacks
- Yueqi Xie, Minghong Fang, and Neil Zhenqiang Gong. "Model Poisoning Attacks to Federated Learning via Multi-Round Consistency". In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2025.
- Xiaoguang Li, Ninghui Li, Wenhai Sun, Neil Zhenqiang Gong, and Hui Li. "Fine-grained Poisoning Attack to Local Differential Privacy Protocols for Mean and Variance Estimation". In USENIX Security Symposium, 2023.
- Xiaoyu Cao and Neil Zhenqiang Gong. "MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients". In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2022.
- Yongji Wu, Xiaoyu Cao, Jinyuan Jia, and Neil Zhenqiang Gong. "Poisoning Attacks to Local Differential Privacy Protocols for Key-Value Data". In USENIX Security Symposium, 2022.
- Xiaoyu Cao, Jinyuan Jia, and Neil Zhenqiang Gong. "Data Poisoning Attacks to Local Differential Privacy Protocols". In USENIX Security Symposium, 2021.
- Minghong Fang, Minghao Sun, Qi Li, Neil Zhenqiang Gong, Jin Tian, and Jia Liu. "Data Poisoning Attacks and Defenses to Crowdsourcing Systems". In The Web Conference (WWW), 2021.
- Minghong Fang*, Xiaoyu Cao*, Jinyuan Jia, and Neil Zhenqiang Gong. "Local Model Poisoning Attacks to Byzantine-Robust Federated Learning". In USENIX Security Symposium, 2020. [Code] *Equal contribution
Preventing poisoning attacks via Byzantine-robust and provably robust federated learning
- Minghong Fang, Xilong Wang, and Neil Zhenqiang Gong. "Provably Robust Federated Reinforcement Learning". In The Web Conference (WWW), 2025.
- Minghong Fang, Zifan Zhang, Hairi, Prashant Khanduri, Jia Liu, Songtao Lu, Yuchen Liu, and Neil Gong. "Byzantine-Robust Decentralized Federated Learning". In ACM Conference on Computer and Communications Security (CCS), 2024.
- Xiaoyu Cao, Zaixi Zhang, Jinyuan Jia, and Neil Zhenqiang Gong. "FLCert: Provably Secure Federated Learning against Poisoning Attacks". IEEE Transactions on Information Forensics and Security (TIFS), 2022.
- Xiaoyu Cao, Jinyuan Jia, and Neil Zhenqiang Gong. "Provably Secure Federated Learning against Malicious Clients". In AAAI Conference on Artificial Intelligence (AAAI), 2021.
- Xiaoyu Cao, Minghong Fang, Jia Liu, and Neil Zhenqiang Gong. "FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping". In ISOC Network and Distributed System Security Symposium (NDSS), 2021. [Code]
Detecting malicious clients
- Yueqi Xie, Minghong Fang, and Neil Zhenqiang Gong. "FedREDefense: Defending against Model Poisoning Attacks for Federated Learning using Model Update Reconstruction Error". In International Conference on Machine Learning (ICML), 2024.
- Zaixi Zhang, Xiaoyu Cao, Jinyuan Jia, and Neil Zhenqiang Gong. "FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients". In ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2022.
Recovering from poisoning attacks
- Xiaoyu Cao, Jinyuan Jia, Zaixi Zhang, and Neil Zhenqiang Gong. "FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information". In IEEE Symposium on Security and Privacy (S&P), 2023.
Privacy-preserving federated learning
- Yuchen Yang, Bo Hui, Haolin Yuan, Neil Gong, and Yinzhi Cao. "PrivateFL: Accurate, Differentially Private Federated Learning via Personalized Data Transformation". In USENIX Security Symposium, 2023.
- Jinyuan Jia and Neil Zhenqiang Gong. "Calibrate: Frequency Estimation and Heavy Hitter Identification with Local Differential Privacy via Incorporating Prior Knowledge". In IEEE International Conference on Computer Communications (INFOCOM), 2019.
Talks
- We have given talks on secure federated learning at many universities, industry labs, and workshops. The talk given at Purdue is available on YouTube [here].
Code and Data
Slides