Introduction
Developing resource-efficient solutions for Federated Learning, e.g., reducing its networking and compute costs.
Summary
Federated Learning (FL) is an ML technique that trains a shared model across multiple decentralized edge devices or servers. Each device or server holds its own local data, and training proceeds without the participants ever exchanging that data. Instead, participants perform local optimization steps on their private data and share only parameter updates.
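The round structure described above (local optimization, then aggregation of parameter updates) can be sketched as a minimal Federated Averaging (FedAvg) loop. This is an illustrative toy with a least-squares model; the function names, learning rate, and data are all made up for the example.

```python
import numpy as np

def local_update(weights, data, lr=0.1, steps=5):
    """A client's local optimization: a few SGD steps on its private data.

    A least-squares loss on (X, y) stands in for a real model.
    """
    X, y = data
    w = weights.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of 0.5*||Xw - y||^2 / n
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """One FL round: clients train locally; the server averages their parameters.

    Only the updated weight vectors travel over the network, never the data.
    """
    updates = [local_update(global_w, data) for data in clients]
    return np.mean(updates, axis=0)  # unweighted FedAvg aggregation

# Synthetic setup: three clients, each with its own local dataset.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
```

After a handful of rounds, the averaged model approaches the weights that fit every client's local data, even though the server never saw the data itself.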
FL naturally introduces overheads, including the bandwidth consumed by exchanging parameter updates and the compute burden placed on participants. Our research focuses on reducing these overheads, taking FL a step closer to wide adoption.
Researchers
External Researchers
- Amit Portnoy [Ben-Gurion University]
- Gal Mendelson [Stanford]
- Kfir Y. Levy [Technion]
- Michael Mitzenmacher [Harvard]
- Ran Ben Basat [University College London]
- Ron Dorfman [VMware and Technion]