Secure Federated Learning

Federated learning is a distributed learning framework that enables many clients to jointly train a model without sending their raw training data to a central server. Due to its distributed nature, federated learning is fundamentally vulnerable to poisoning attacks, in which malicious clients manipulate the training process by sending carefully crafted model updates to the server, such that the learned global model makes incorrect predictions as the attacker desires. Moreover, a malicious client or an honest-but-curious server can reconstruct a client's local training data from the model updates or global models. We study poisoning and privacy attacks on federated learning, as well as defenses that provably prevent, detect, and recover from them.
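As a minimal illustration of why naive aggregation is vulnerable, the sketch below (my own toy example, not code from our papers) compares plain federated averaging with a coordinate-wise median rule, one example of a Byzantine-robust aggregation. A single malicious client's update drags the mean far from the honest clients' consensus, while the median bounds its influence.

```python
from statistics import median

def fedavg(updates):
    # Plain federated averaging: coordinate-wise mean of client updates.
    dim = len(updates[0])
    return [sum(u[d] for u in updates) / len(updates) for d in range(dim)]

def robust_median(updates):
    # A Byzantine-robust alternative: coordinate-wise median, which bounds
    # the influence any single malicious client can exert per coordinate.
    dim = len(updates[0])
    return [median(u[d] for u in updates) for d in range(dim)]

# Toy setting: nine honest clients report updates near 1.0 in each
# coordinate; one malicious client sends an extreme poisoned update.
honest = [[1.0, 1.0] for _ in range(9)]
poisoned = [[-100.0, -100.0]]
updates = honest + poisoned

print(fedavg(updates))         # mean is dragged to [-9.1, -9.1]
print(robust_median(updates))  # median stays at [1.0, 1.0]
```

Real defenses must also handle colluding attackers and non-IID client data, which is where provable robustness guarantees become nontrivial.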

Publications

Poisoning attacks

Preventing poisoning attacks via Byzantine-robust and provably robust federated learning

Detecting malicious clients

Recovering from poisoning attacks

Privacy-preserving federated learning

Talks

  • We have given talks on secure federated learning at many universities, industry labs, and workshops. The talk given at Purdue is available on YouTube [here].

Code and Data

Slides