Trustworthy Federated Learning!
I will be giving a series of talks on "Trustworthy and Scalable Federated Learning" to highlight several exciting new results from our group.
- Invited talk at FL-ICML'21 
- Invited seminar at Berkeley Laboratory for Information and System Sciences (BLISS) 
- Invited seminar at Federated Learning One World Seminar (FLOW) [Video] 
- Keynote at the AI Summit conference in Korea 
- Invited talk at CCF Advanced Disciplines Lecture on Privacy Preserving Machine Learning (Institute of Computing Technology, Chinese Academy of Sciences) 
Here is a video of one of the talks:
Trustworthy and Scalable Federated Learning
Federated learning (FL) is a promising framework for privacy-preserving machine learning across many decentralized users. Its key idea is to train models locally at each user, so that no device's dataset ever needs to be moved or centralized, thereby protecting users' privacy. In this talk, I will highlight several exciting research challenges in making such a decentralized system trustworthy and scalable to a large number of resource-constrained users. In particular, I will discuss three directions: (1) resilient and secure model aggregation, a key component and performance bottleneck in FL; (2) FL of large models over resource-constrained users via knowledge transfer; and (3) FedML, our open-source research library and benchmarking ecosystem for FL research (fedml.ai).
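To give a flavor of what "secure model aggregation" means, here is a minimal sketch of the classic pairwise-masking idea that protocols in this space build on. This is not the TurboAggregate protocol itself (which adds circular communication, coding, and dropout resilience); it is a simplified illustration in which every pair of users derives a shared random mask, one adds it and the other subtracts it, so the server sees only masked updates yet recovers the exact sum. The function names and the integer seed-mixing scheme are illustrative choices, not from the papers.

```python
import random

def pairwise_masks(user_ids, dim, seed=0):
    """For each user pair (i, j) with i < j, derive a shared random mask.
    User i adds the mask to its update and user j subtracts it, so all
    masks cancel when the server sums the masked updates."""
    masks = {u: [0.0] * dim for u in user_ids}
    for i in user_ids:
        for j in user_ids:
            if i < j:
                # Illustrative shared seed; a real protocol would use a
                # key agreement (e.g. Diffie-Hellman) between i and j.
                rng = random.Random(seed * 1_000_003 + i * 1_009 + j)
                m = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
                for k in range(dim):
                    masks[i][k] += m[k]
                    masks[j][k] -= m[k]
    return masks

def secure_aggregate(updates, seed=0):
    """Server averages masked updates; individual updates stay hidden,
    but the pairwise masks cancel in the sum."""
    user_ids = sorted(updates)
    dim = len(next(iter(updates.values())))
    masks = pairwise_masks(user_ids, dim, seed)
    # Each user uploads only its masked update.
    masked = {u: [updates[u][k] + masks[u][k] for k in range(dim)]
              for u in user_ids}
    total = [sum(masked[u][k] for u in user_ids) for k in range(dim)]
    return [t / len(user_ids) for t in total]

avg = secure_aggregate({0: [1.0, 2.0], 1: [3.0, 4.0], 2: [5.0, 0.0]})
# avg is (numerically) the plain average [3.0, 2.0]
```

The fragility this sketch exposes, and that the full protocols address, is user dropout: if a user vanishes after masks are agreed, its partners' masks no longer cancel, which is exactly where the more elaborate aggregation machinery comes in.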
This talk is based on several papers: TurboAggregate (JSAIT'21, arXiv:2002.04156), Byzantine-Resilient Secure Federated Learning (JSAC'20, arXiv:2007.11115), FedGKT (NeurIPS'20, arXiv:2007.14513), FedNAS (CVPR-NAS'20, arXiv:2004.08546), FedML (NeurIPS-SpicyFL'20, arXiv:2007.13518), and FedGraphNN (ICLR-DPML'21 & MLSys-GNNSys'21).